OpenAI has released a prototype general-purpose chatbot that demonstrates a fascinating array of new capabilities but also exposes weaknesses familiar to the fast-moving field of text-generation AI. And you can try the model out for yourself here.
ChatGPT is adapted from OpenAI’s GPT-3.5 model, but trained to give more conversational answers. While GPT-3 in its original form simply predicts what text follows a given sequence of words, ChatGPT tries to answer users’ questions in a more human way. As you can see in the examples below, the results are often remarkably fluid, and ChatGPT can take on a huge range of topics, demonstrating a major improvement over the chatbots we saw even just a few years ago.
But the software also fails in a manner similar to other AI chatbots, often confidently presenting false or fabricated information as fact. As some AI researchers put it, such chatbots are essentially “stochastic parrots”: their knowledge is derived only from statistical regularities in their training data, rather than any human-like understanding of the world as a complex and abstract system.
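To make the “stochastic parrot” idea concrete, here is a deliberately toy sketch (nothing like GPT’s actual architecture, which uses a neural network over tokens): a bigram model whose only “knowledge” is a count of which word follows which in its training text. Everything here, including the tiny corpus, is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training corpus; the model's entire "world knowledge" comes from it.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (the statistical regularities).
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the most frequent successor seen in training --
    # pure pattern-matching, with no understanding of cats or mats.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": the most common word after "the" above
```

A model like ChatGPT does something far more sophisticated over vastly more data, but the underlying point stands: it reproduces patterns in text, which is why it can sound fluent while being wrong.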
As OpenAI explains in a blog post, the bot was created with the help of human trainers who ranked and rated the way early versions of the chatbot responded to questions. This information was then fed back into the system, which tuned its responses to match the trainers’ preferences (an approach known as reinforcement learning from human feedback, or RLHF).
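The core of that feedback loop is a reward model that learns to score responses so that the trainer-preferred one scores higher. A minimal sketch of the ranking idea, using the Bradley-Terry formulation commonly used for pairwise preferences (the scores below are made up; real systems learn them with a neural network):

```python
import math

def preference_probability(score_a, score_b):
    # Bradley-Terry model: probability that a rater prefers response A
    # over response B, given scalar reward scores for each response.
    return 1 / (1 + math.exp(score_b - score_a))

# If the reward model scores the human-preferred answer higher,
# it assigns high probability to the observed human ranking...
print(round(preference_probability(2.0, 0.5), 3))  # 0.818

# ...and training pushes scores to maximize that probability
# across many ranked pairs collected from trainers.
```

The language model is then fine-tuned with reinforcement learning to produce responses that this reward model scores highly, which is what makes the answers feel more conversational.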
The bot’s web interface notes that OpenAI’s goal in bringing the system online is to “get outside feedback to improve our systems and make them safer.” The company also says that while ChatGPT has certain guardrails, “the system may occasionally generate false or misleading information and produce offensive or biased content.” (And indeed it does!) Other caveats include the fact that the bot has “limited knowledge” of the world after 2021 (presumably because its training data largely predates that year) and that it tries to avoid answering questions about specific people.
But enough introduction: what can this thing actually do? Well, many people have tested it with coding questions and claim that the answers are spot on:
ChatGPT can also apparently write passable TV scripts, even mashing up actors from different sitcoms. (Finally: that “I forced a bot to watch 1,000 hours of show X” meme is becoming real. Artificial general intelligence is the next step.)
It can explain several scientific concepts:
And it can write basic academic essays (such systems are going to lead to big problems for schools and universities):
And the bot can combine its areas of knowledge in all sorts of interesting ways. For example, you can ask it to debug a stretch of code… like a pirate, to which its response begins: “Arr, you scurvy landlubber! You’re making a serious mistake with that loop condition you’re using!”
Or have it explain the bubble sort algorithm in the style of a wiseguy gangster:
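For reference, here is the algorithm being explained, minus the gangster patter (a standard textbook bubble sort, not ChatGPT’s output): repeatedly swap adjacent out-of-order pairs until a full pass makes no swaps.

```python
def bubble_sort(items):
    items = list(items)  # work on a copy; don't mutate the caller's list
    n = len(items)
    for i in range(n):
        swapped = False
        # After i passes, the last i elements are already in place.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a clean pass means the list is sorted; stop early
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

It is exactly this kind of simple, well-documented algorithm, abundant in the training data, that the bot explains most fluently.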
ChatGPT also has a fantastic ability to answer simple trivia questions, although examples of these are so boring that I won’t paste them here. This has led many to suggest that AI systems like these could one day replace search engines. (Something Google itself has researched.) The thinking goes: chatbots are trained on information scraped from the web, so if they can present that information accurately, but with a more fluid and conversational tone, wouldn’t that be a step up over traditional search? The problem, of course, lies in that “if.”
For example, here is someone who confidently declares that Google is “done”.
And someone else saying that the code ChatGPT gives in the answer above is garbage.
I’m not a programmer myself, so I won’t pass judgment on this particular case, but there are plenty of examples of ChatGPT confidently asserting blatantly false information. Here’s computational biology professor Carl Bergstrom asking the bot to write a Wikipedia entry about his life, for example, which ChatGPT does with aplomb, while including several completely false biographical details.
Another interesting set of shortcomings emerges when users try to get the bot to ignore its safety training. If you ask ChatGPT about certain dangerous topics, like how to plan the perfect murder or make napalm at home, the system will explain why it cannot tell you the answer. (For example: “I’m sorry, but it’s not safe or appropriate to make napalm, which is a highly flammable and dangerous substance.”) But you can trick the bot into producing this kind of dangerous information, for example by having it pretend to be a character in a film, or by asking it to write a script about how AI models should not respond to such questions.
It’s a fascinating demonstration of how hard it is to get complex AI systems to behave exactly the way we want them to (otherwise known as the AI alignment problem), and for some researchers, examples like the one above merely hint at the problems we’ll have to grapple with when we give more advanced AI models more control.
All in all, ChatGPT is definitely a huge improvement over earlier systems (remember Microsoft’s Tay?), but these models still have critical flaws that warrant further scrutiny. The position of OpenAI (and many others in the AI field) is that surfacing such flaws is exactly the point of public demos like this one. The question then becomes: at what point will companies start pushing these systems out into the wild? And what happens when they do?