But imagine a chatbot enhanced by artificial intelligence (AI) that can not only expertly answer your questions, but also write stories, give life advice, and even compose poems and write computer code.
ChatGPT, a chatbot released last week by OpenAI, seems to live up to this promise. It has generated a lot of excitement, with some going so far as to suggest it could signal a future in which AI dominates human content producers.
What has ChatGPT done to prompt such claims? And how might it (and its future iterations) become indispensable in our daily lives?
What’s notable about the response to ChatGPT isn’t just the number of people blown away by it, but who they are. These are not people who get excited about every shiny new thing. Clearly something big is going on.
—Paul Graham (@paulg) December 2, 2022
What can ChatGPT do?
ChatGPT builds on GPT-3, OpenAI’s previous text generator. OpenAI creates its text-generating models by using machine-learning algorithms to process massive amounts of text data, including books, news articles, Wikipedia pages, and millions of websites.
By taking in such large amounts of data, the models learn the complex patterns and structure of language and gain the ability to interpret the desired outcome of a user’s request.
ChatGPT builds an advanced and abstract representation of the knowledge in its training data, which it draws on to produce outputs. This is why it writes relevant content rather than just spewing grammatically correct nonsense.
While GPT-3 is designed to continue a text prompt, ChatGPT is optimized to engage in conversation, answer questions, and be helpful.
ChatGPT immediately caught my attention by correctly answering the exam questions I asked my undergraduate and postgraduate students, including questions requiring coding skills. Other academics had similar results.
Overall, it can provide really informative and useful explanations on a wide variety of topics.
ChatGPT may also be useful as a writing assistant. It does a good job of drafting text and coming up with seemingly “original” ideas.
The power of feedback
Why does ChatGPT seem so much more capable than some of its past counterparts? A lot of this probably comes down to how it was trained.
During ChatGPT’s development, the model was shown conversations in which human AI trainers demonstrated the desired behavior. Although a similar model, InstructGPT, was trained in this way, ChatGPT is the first popular model to use the method.
And it seems to have given it a huge edge. Incorporating human feedback has helped push ChatGPT toward producing more helpful responses and rejecting inappropriate requests.
Refusing inappropriate requests is a particularly big step toward improving the safety of AI text generators, which can otherwise be used to produce harmful content such as fake news, spam, propaganda, and fake reviews.
Previous text-generating models have been criticized for reproducing the gender, racial, and cultural biases present in their training data. In some cases, ChatGPT successfully avoids reinforcing such stereotypes.
Nevertheless, users have already found ways to circumvent existing safeguards and produce biased responses.
The fact that the system often accepts requests to write fake content is further evidence that it still needs refinement.
ChatGPT is perhaps one of the most promising AI text generators, but it is not free from errors and limitations. For example, the programming advice platform Stack Overflow temporarily banned answers generated by the chatbot due to their lack of accuracy.
A practical problem is that ChatGPT’s knowledge is static: it cannot access new information in real time.
However, the interface allows users to provide feedback on the model’s performance by indicating ideal answers and reporting harmful, false, or useless answers.
OpenAI aims to address existing problems by incorporating this feedback into the system. The more feedback users provide, the more likely ChatGPT is to decline requests that would lead to unwanted output.
a lot of what people assume is us censoring ChatGPT is in fact us trying to prevent it from making up random facts.
it’s difficult to find the right balance with the current state of the technology.
it will get better over time and we will use your feedback to improve it.
— Sam Altman (@sama) December 4, 2022
A possible improvement could come from adding a “trust indicator” feature based on user feedback. This tool, which could be built on top of ChatGPT, would indicate the model’s confidence in the information it provides – leaving it up to the user to decide whether to use it or not. Some question-answering systems already do this.
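To illustrate how such a trust indicator might work, here is a minimal, hypothetical sketch in Python. It assumes the underlying language model exposes per-token log-probabilities (as GPT-3’s completion API does via its `logprobs` option) and converts their average into a rough confidence label. The function name, thresholds, and labels are invented for illustration, not part of any real ChatGPT feature.

```python
import math

def confidence_label(token_logprobs):
    """Map per-token log-probabilities to a rough trust label.

    Hypothetical heuristic: the average per-token probability serves
    as a proxy for the model's confidence in its own output.
    """
    if not token_logprobs:
        return "unknown"
    avg_prob = math.exp(sum(token_logprobs) / len(token_logprobs))
    if avg_prob > 0.9:
        return "high confidence"
    if avg_prob > 0.6:
        return "medium confidence"
    return "low confidence: verify before relying on this answer"

# Example: log-probabilities for a confidently generated short answer
print(confidence_label([-0.05, -0.02, -0.10]))  # high confidence
```

A real system would need a far more careful calibration than this, since a model can be fluently confident and still wrong, but the basic idea is the same: surface a signal and let the user decide how much to trust the answer.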
A new tool, but not a human replacement
Despite its limitations, ChatGPT works surprisingly well for a prototype.
From a research standpoint, it marks an advancement in the development and deployment of human-centric AI systems. On the practical side, it is already effective enough to have some everyday uses.
For example, it can be used as an alternative to Google. While a Google search requires you to scour a number of websites and dig even deeper to find the information you want, ChatGPT instantly answers your question – and often does this well.
With user feedback and a more powerful GPT-4 model coming soon, ChatGPT may improve significantly in the future. As ChatGPT and other similar chatbots become more popular, they will likely have applications in education and customer service.
While ChatGPT may start doing tasks traditionally done by humans, there’s no sign of it replacing professional writers anytime soon.
While they can impress us with their abilities and even their apparent creativity, AI systems remain a reflection of their training data – and do not have the same capacity for originality and critical thinking as humans.
This article has been republished from The Conversation under a Creative Commons license. Read the original article.