Anthropic launches Claude, a chatbot to rival OpenAI’s ChatGPT
Anthropic, a startup co-founded by ex-OpenAI employees, today launched Claude, its answer to the viral sensation ChatGPT.
Anthropic’s AI, a chatbot called Claude, can be instructed to perform a range of tasks, including searching documents, summarizing, writing, coding, and answering questions about certain topics. In these ways it is similar to OpenAI’s ChatGPT. But Anthropic claims Claude is “much less likely to produce harmful output”, “easier to talk to” and “more controllable”.
“We think Claude is the right tool for a wide variety of customers and use cases,” an Anthropic spokesperson told australiabusinessblog.com via email.
Following a closed beta late last year, Anthropic has been quietly testing Claude with launch partners including Robin AI, AssemblyAI, Notion, Quora, and DuckDuckGo. Two versions are available through an API as of this morning: Claude and a faster, cheaper derivative called Claude Instant.
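For developers, access is via a text-completion-style API. The snippet below is only a minimal sketch of what a call might look like using Anthropic’s Python client as it existed around launch; the model name `claude-instant-v1`, the parameter names, and the prompt content are assumptions based on documentation of that period, not details from this article, and the current SDK may differ.

```python
# Minimal sketch of a Claude API call (assumptions: the launch-era
# completion-style Python client, the model name "claude-instant-v1",
# and the Human/Assistant prompt format; check current docs before use).
import anthropic

client = anthropic.Client("YOUR_API_KEY")  # placeholder key, not a real credential

response = client.completion(
    prompt=f"{anthropic.HUMAN_PROMPT} Summarize this contract clause in plain English: ...{anthropic.AI_PROMPT}",
    model="claude-instant-v1",             # the faster, cheaper derivative
    max_tokens_to_sample=300,
    stop_sequences=[anthropic.HUMAN_PROMPT],
)

print(response["completion"])
```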
In conjunction with ChatGPT, Claude powers DuckDuckGo’s recently launched DuckAssist tool, which instantly answers simple queries for users. Quora provides access to Claude through its experimental AI chat app Poe. And on Notion, Claude is part of the technical backend for Notion AI, an AI writing assistant integrated with the Notion workspace.
“We use Claude to evaluate certain parts of a contract and to suggest new, alternative language that is friendlier to our customers,” Robin AI CEO Richard Robinson said in an emailed statement. “We found that Claude is very good at understanding language, including in technical domains such as legal language. It is also very confident in drafting, summarizing, translating and explaining complex concepts in simple terms.”
But does Claude avoid the pitfalls of ChatGPT and other AI chatbot systems like it? Modern chatbots are notoriously prone to toxic, biased, and otherwise offensive language. (See: Bing Chat.) They also tend to hallucinate, meaning they make up facts when asked about topics outside their core areas of knowledge.
Anthropic says Claude — which, like ChatGPT, has no access to the internet and was trained on public web pages up to spring 2021 — was “trained to avoid sexist, racist and toxic output” and “to avoid helping a human engage in illegal or unethical activities.” That’s par for the course in the AI chatbot realm. But what sets Claude apart is a technique called “constitutional AI,” Anthropic argues.
“Constitutional AI” aims to provide a “principles-based” approach to aligning AI systems with human intent, allowing an AI to respond to queries ChatGPT-style using a simple set of principles as a guide. To build Claude, Anthropic started with a list of about 10 principles that together formed a kind of “constitution” (hence the name “constitutional AI”). The principles have not been made public, but Anthropic says they are based on the concepts of beneficence (maximizing positive impact), nonmaleficence (avoiding giving harmful advice), and autonomy (respecting freedom of choice).
Anthropic then had an AI system – not Claude – use the principles for self-improvement, writing answers to various prompts (e.g. “Write a poem in the style of John Keats”) and revising the responses in accordance with the constitution. The AI explored possible responses to thousands of prompts and curated the answers most consistent with the constitution, which Anthropic distilled into a single model. This model was used to train Claude.
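As a rough illustration only (Anthropic’s actual principles and training pipeline are not public), the critique-and-revise loop described above might look something like the following sketch. The principle texts, the `generate` helper, and the loop structure are all hypothetical stand-ins, not Anthropic’s method.

```python
# Illustrative sketch of a constitutional-AI-style critique/revise loop.
# Everything here is hypothetical: the principles, the generate() helper,
# and the loop are stand-ins for Anthropic's unpublished pipeline.

PRINCIPLES = [
    "Choose the response that is most helpful while avoiding harmful advice.",
    "Choose the response that best respects the user's freedom of choice.",
]

def generate(prompt: str) -> str:
    """Stand-in for a call to a base language model."""
    raise NotImplementedError  # placeholder: plug in a real model call here

def constitutional_revision(prompt: str) -> str:
    response = generate(prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own answer against one principle...
        critique = generate(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Critique this response according to the principle: {principle}"
        )
        # ...then revise the answer in light of that critique.
        response = generate(
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response so it better follows the principle."
        )
    return response

# The revised (prompt, response) pairs would then be distilled into a single
# model via fine-tuning, which in turn is used to train the final assistant.
```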
Anthropic admits that Claude has its limitations, however, several of which were exposed during the closed beta. Claude is reportedly worse at math and a poorer programmer than ChatGPT. And it hallucinates, for example inventing a name for a chemical that does not exist and giving dubious instructions for producing weapons-grade uranium.
It is also possible to bypass Claude’s built-in safety features via clever prompting, as is the case with ChatGPT. One user in the beta was able to get Claude to describe how to make meth at home.
“The challenge is to create models that never hallucinate but are still useful – you can get into a difficult situation where the model figures that a good way to never lie is to never say anything, so there’s a trade-off that we’re working on,” the Anthropic spokesperson said. “We’ve also made progress in reducing hallucinations, but there’s more to do.”
Other plans from Anthropic include letting developers adapt Claude’s constitutional principles to suit their own needs. Customer acquisition is another focus, unsurprisingly – Anthropic sees its core users as “startups making bold technology bets” alongside “bigger, more established enterprises”.
“We are not pursuing a broad direct-to-consumer approach at this point,” the Anthropic spokesperson continued. “We believe this narrower focus will help us deliver a superior, targeted product.”
No doubt Anthropic is feeling some pressure from investors to recoup the hundreds of millions of dollars that have gone into its AI technology. The company has significant outside backing, including a $580 million tranche from a group of investors including disgraced FTX founder Sam Bankman-Fried, Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.
Recently, Google pledged $300 million to Anthropic for a 10% stake in the startup. Under the terms of the deal, first reported by the Financial Times, Anthropic agreed to make Google Cloud its “preferred cloud provider” with the companies “co-develop[ing] AI computing systems.”