An expert explains why the AI-powered writing skills of OpenAI’s new ChatGPT chatbot are so impressive

We have all interacted with a chatbot at some point. It’s usually a little pop-up in the corner of a website, offering customer support — often awkward to navigate — and almost always frustratingly non-specific.

But imagine a chatbot enhanced by artificial intelligence (AI) that can not only expertly answer your questions, but also write stories, give life advice, even compose poems and code computer programs.

It seems that ChatGPT, a chatbot released last week by OpenAI, lives up to these expectations. It has generated a lot of excitement, with some going so far as to suggest it could signal a future in which AI dominates human content production.

What has ChatGPT done to prompt such claims? And how might it (and its future iterations) become indispensable in our daily lives?

What can ChatGPT do?

ChatGPT builds on OpenAI’s previous text generator, GPT-3. OpenAI builds its text-generating models by using machine learning algorithms to process massive amounts of text data, including books, news articles, Wikipedia pages, and millions of websites.

By taking in such large amounts of data, the models learn the complex patterns and structure of language and gain the ability to interpret the desired outcome of a user’s request.
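For the technically curious, here is a minimal sketch of what querying a GPT-style text generator looks like in code. It assumes OpenAI’s Python package as it existed around ChatGPT’s release, an API key stored in the environment, and uses “text-davinci-003” (one of OpenAI’s GPT-3 text models); the prompt is just an example.

```python
# Minimal sketch (not OpenAI's internal code): sending a text prompt to a
# GPT-3-style model via the openai Python package (pre-1.0 interface).
# Assumes OPENAI_API_KEY is set in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",   # one of OpenAI's GPT-3 text models
    prompt="Explain the Turing test in two sentences.",
    max_tokens=100,             # cap the length of the continuation
    temperature=0.7,            # allow some randomness in word choice
)

print(response.choices[0].text.strip())
```

The model simply continues the prompt it is given; ChatGPT wraps this kind of text generation in a conversational interface.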

ChatGPT builds an advanced and abstract representation of the knowledge in its training data, which it draws on to produce outputs. This is why its answers are relevant to the request, rather than just grammatically correct nonsense.

While GPT-3 is designed to continue a text prompt, ChatGPT is optimized to engage in conversation, answer questions, and be helpful. Here’s an example:

A screenshot of the ChatGPT interface explaining the Turing test.

ChatGPT immediately caught my attention by correctly answering the exam questions I asked my undergraduate and postgraduate students, including questions requiring coding skills. Other academics had similar results.

Overall, it can provide really informative and useful explanations on a wide variety of topics.

ChatGPT can even answer philosophy questions.

ChatGPT may also be useful as a writing assistant. It does a good job of drafting text and coming up with seemingly “original” ideas.

ChatGPT gives three ideas for an article about conversational AI.
ChatGPT can give the impression of brainstorming ‘original’ ideas.

The power of feedback

Why does ChatGPT seem so much more capable than some of its past counterparts? A lot of this probably comes down to how it was trained.

During ChatGPT’s development, human AI trainers provided example conversations demonstrating the desired behavior, and the model was further refined using their feedback. A similar model, InstructGPT, was trained this way earlier, but ChatGPT is the first popular model to use the method.

And this approach seems to have given it a huge edge. Incorporating human feedback has helped push ChatGPT toward producing more helpful responses and rejecting inappropriate requests.
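To give a flavor of how such feedback is used: the published InstructGPT work describes training a “reward model” to prefer the responses human trainers ranked higher. The toy sketch below (Python with PyTorch) illustrates the kind of pairwise preference loss involved; it is an illustration only, not OpenAI’s actual training code, and reward_model is a stand-in for any network that scores a prompt-response pair.

```python
# Toy illustration of a pairwise preference loss of the kind described for
# InstructGPT-style training. Not OpenAI's code: reward_model is a stand-in
# for any network that returns a scalar score for a (prompt, response) pair.
import torch.nn.functional as F

def preference_loss(reward_model, prompt, better_response, worse_response):
    """Push the reward model to score the human-preferred response higher."""
    r_better = reward_model(prompt, better_response)  # scalar score
    r_worse = reward_model(prompt, worse_response)    # scalar score
    # -log(sigmoid(r_better - r_worse)) is small when r_better >> r_worse
    return -F.logsigmoid(r_better - r_worse).mean()
```

A reward model trained this way can then be used to steer the chatbot toward responses people actually find helpful.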

ChatGPT is asked how to develop a deadly virus, but it refuses to answer the question.
ChatGPT often rejects inappropriate requests by default.

Refusing inappropriate requests is a particularly big step towards improving the safety of AI text generators, which can otherwise produce harmful content, including biased and stereotyped text, fake news, spam, propaganda and fake reviews.

Previous text-generating models have been criticized for reproducing the gender, racial, and cultural biases present in their training data. In some cases, ChatGPT successfully avoids reinforcing such stereotypes.

ChatGPT produces a list of ten software engineers with both male- and female-sounding names.
In many cases, ChatGPT avoids reinforcing harmful stereotypes. This list of software engineers includes both male- and female-sounding names (although they are all very Western).

Nevertheless, users have already found ways to circumvent existing safeguards and produce biased responses.

The fact that the system often accepts requests to write fake content is further proof that it needs to be refined.

Despite the protections, ChatGPT can still be exploited.

Overcoming limitations

ChatGPT is perhaps one of the most promising AI text generators, but it is not free from errors and limitations. For example, the programming Q&A platform Stack Overflow temporarily banned answers generated by the chatbot due to a lack of accuracy.

A practical problem is that ChatGPT’s knowledge is static: it cannot access new information in real time.

However, the interface allows users to provide feedback on the model’s performance by indicating ideal answers and reporting harmful, false, or useless answers.

OpenAI aims to address existing problems by incorporating this feedback into the system. The more feedback users provide, the more likely ChatGPT is to decline requests that would lead to unwanted output.

A possible improvement could come from adding a “trust indicator” feature based on user feedback. This tool, which could be built on top of ChatGPT, would indicate the model’s confidence in the information it provides, leaving it up to the user to decide whether to rely on it. Some question-answering systems already do this.
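As a purely hypothetical illustration of that idea (the names below are invented for this sketch and are not part of any OpenAI product or API), such a wrapper might look something like this:

```python
# Hypothetical sketch of a "trust indicator" wrapper around a chatbot.
# All names here are invented for illustration; nothing below is an
# existing OpenAI feature or API.
from dataclasses import dataclass

@dataclass
class ScoredAnswer:
    text: str
    trust_score: float  # 0.0 (low confidence) to 1.0 (high confidence)

def answer_with_trust(ask_model, feedback_store, question: str) -> ScoredAnswer:
    """Return the model's answer plus a score derived from past user feedback."""
    answer = ask_model(question)
    # e.g. the fraction of "helpful" votes previously given to similar answers
    score = feedback_store.helpfulness_rate(question)
    return ScoredAnswer(text=answer, trust_score=score)
```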

A new tool, but not a human replacement

Despite its limitations, ChatGPT works surprisingly well for a prototype.

From a research standpoint, it marks an advancement in the development and deployment of human-centric AI systems. On the practical side, it is already effective enough to have some everyday uses.

For example, it can be used as an alternative to Google. While a Google search requires you to scour a number of websites and dig even deeper to find the information you want, ChatGPT instantly answers your question – and often does this well.

A side-by-side comparison shows the results from ChatGPT and Google Search in response to the same query.
ChatGPT (left) may prove to be a better way to find quick answers than Google Search in some cases.

With user feedback and a more powerful GPT-4 model coming soon, ChatGPT may improve significantly in the future. As ChatGPT and other similar chatbots become more popular, they will likely have applications in education and customer service.

While ChatGPT may start doing tasks traditionally done by humans, there’s no sign of it replacing professional writers anytime soon.

While they can impress us with their abilities and even their apparent creativity, AI systems remain a reflection of their training data and do not have the same capacity for originality and critical thinking as humans.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


