There’s been a great deal of hype and excitement in the artificial intelligence (AI) world around a newly developed technology known as GPT-3. Put simply, it’s an AI that is better at creating content with a language structure (human language or machine language) than anything that has come before it. GPT-3 was created by OpenAI, a research business co-founded by Elon Musk, and it has been described as the most important and useful advance in AI for years.
But there’s some confusion over exactly what it does (and indeed doesn’t do), so here I will try to break it down into simple terms for any non-techy readers interested in understanding the fundamental principles behind it.
I’ll also cover some of the problems it raises, as well as why some people think its significance has been somewhat overinflated by hype.
Starting with the very basics, GPT-3 stands for Generative Pre-trained Transformer 3 – it’s the third version of the tool to be released. In short, this means it generates text using algorithms that are pre-trained: they’ve already been fed all of the data they need to carry out their task. Specifically, they’ve been fed around 570GB of text information gathered by crawling the internet (a publicly available dataset known as CommonCrawl), along with other texts selected by OpenAI, including the text of Wikipedia.
This means that if you ask it a question, you get an answer, since that is the most useful response. If you ask it to carry out a task such as creating a summary or writing a poem, you get a summary or a poem. More technically, it has also been described as the largest artificial neural network ever created; I will cover that further down.
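To make that prompt-in, text-out idea concrete: GPT-3 itself is only reachable through OpenAI’s API (more on that below), but the same pre-trained, one-word-at-a-time approach can be tried out with its openly available predecessor, GPT-2, via the Hugging Face transformers library. This is just a minimal sketch; the prompt and the generation settings are illustrative choices of mine, not anything specified by OpenAI.

```python
# A minimal sketch of pre-trained text generation, using GPT-2
# (GPT-3's openly available predecessor) via Hugging Face transformers.
# The prompt and sampling settings are illustrative choices.
from transformers import pipeline

# Load a model whose weights were already pre-trained on a large
# web-text corpus, so no further training is needed before generating.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the way we work because"
result = generator(
    prompt,
    max_length=60,    # stop after roughly 60 tokens
    do_sample=True,   # sample rather than always taking the likeliest word
    temperature=0.8,  # higher values make the continuation more varied
)

# The model extends the prompt one predicted token at a time.
print(result[0]["generated_text"])
```

GPT-2 is far smaller than GPT-3, so its output is noticeably less coherent, but the underlying mechanism (predicting the most plausible next word, over and over) is the same.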
GPT-3 can create anything that has a language structure – which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code. In fact, in one demo available online, it is shown creating an app that looks and functions similarly to the Instagram application, using a plugin for the software tool Figma, which is widely used for app design.
This is, of course, pretty revolutionary, and if it proves to be usable and useful in the long term, it could have huge implications for the way software and apps are developed in the future.
As the code itself isn’t available to the public yet (more on that later), access is granted only to selected developers through an API maintained by OpenAI. Since the API was made available in June this year, examples have emerged of poetry, prose, news reports, and creative fiction.
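For developers who have been granted access, the API is essentially a simple prompt-in, completion-out web service. The snippet below is a sketch of what a call looked like with the openai Python package around the API’s 2020 launch; the API key is a placeholder, and the engine name and parameters are illustrative rather than definitive.

```python
# A sketch of calling the GPT-3 API with the openai Python package
# as it existed around the API's 2020 launch. The key is a placeholder,
# and the engine name and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI once access is granted

response = openai.Completion.create(
    engine="davinci",  # one of the GPT-3 engines exposed by the API
    prompt="Write a two-line poem about the sea.",
    max_tokens=40,     # cap the length of the reply
    temperature=0.7,   # trade predictability for variety
)

# The generated text comes back in the first "choice".
print(response.choices[0].text.strip())
```

Notice that nothing here involves training: the developer supplies only a prompt, and all of the model’s knowledge comes from its pre-training, which is exactly what “pre-trained” in the name refers to.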
This article is particularly interesting: in it, you can see GPT-3 making a quite persuasive attempt at convincing us humans that it doesn’t mean any harm.