AI guru Geoffrey Hinton says AI is a new form of intelligence unlike our own, so are we thinking about it wrong?

Debates about AI often characterize it as a technology that has come to compete with human intelligence. Indeed, one of the most vocal fears is that AI could achieve human-like intelligence and make humans obsolete in the process.

However, one of the world’s top AI scientists now describes AI as a new form of intelligence – one that carries unique risks and therefore requires unique solutions.

Geoffrey Hinton, a leading AI scientist and winner of the 2018 Turing Award, has just resigned from Google to warn the world about the dangers of AI. He follows in the footsteps of more than 1,000 technology leaders who signed an open letter calling for a global halt to the development of advanced AI for at least six months.

Hinton’s argument is nuanced. While he thinks AI has the ability to become smarter than humans, he also suggests it should be viewed as a wholly different form of intelligence from ours.

Why Hinton’s ideas matter

While experts have been raising red flags for months, Hinton’s decision to voice his concerns is significant.

He has been referred to as the “godfather of AI” and helped pioneer many of the methods that underpin the modern AI systems we see today. His early work on neural networks made him one of three recipients of the 2018 Turing Award. And one of his students, Ilya Sutskever, went on to co-found OpenAI, the organization behind ChatGPT.

When Hinton speaks, the AI world listens. And if we take seriously his framing of AI as an intelligent non-human entity, you could argue that we’ve been thinking about it all wrong.

The false equivalence trap

On the one hand, large language model-based tools like ChatGPT produce text that closely resembles what people write. ChatGPT even makes things up, or “hallucinates,” which Hinton notes humans also do. But we risk being reductive if we treat such similarities as a basis for comparing AI intelligence to human intelligence.


We can find a useful analogy in the invention of artificial flight. For thousands of years, people tried to fly by imitating birds: flapping their arms with a device that mimicked feathers. This didn’t work. Eventually we realized that fixed wings generate lift using a different principle, and this ushered in the age of flight.

Airplanes are no better or worse than birds; they are different. They do different things and face different risks.

AI (and computation for that matter) is a similar story. Large language models such as GPT-3 are similar in many ways to human intelligence, but work differently. ChatGPT processes huge chunks of text to predict the next word in a sentence. People approach the formation of sentences in a different way. Both are impressive.
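The “predict the next word” idea can be made concrete with a toy sketch. The following is a drastically simplified, hypothetical illustration using bigram counts (real large language models use neural networks over vast corpora, not lookup tables); the corpus and function names here are invented for the example:

```python
from collections import defaultdict

# Toy next-word predictor built from bigram counts -- a hypothetical,
# drastically simplified stand-in for what language models do at scale.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    followers = bigram_counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```

The contrast with human sentence formation is the point: the model has no intent or meaning in mind, only statistics over what tends to come next.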

How is AI intelligence unique?

AI experts and non-experts alike have long drawn links between AI and human intelligence – not to mention a tendency to anthropomorphize AI. But AI is fundamentally different from us in several ways. As Hinton explains:

If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy […] But I can have 10,000 neural networks, each with their own experiences, and each of them can instantly share what they learn. That’s a huge difference. It’s like there are 10,000 of us, and as soon as one person learns something, we all know it.
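Hinton’s point about instant knowledge-sharing can be sketched numerically. One simple way identical networks can pool what they each learned is by averaging their weights; this is a hypothetical illustration of the idea (real distributed training typically shares gradients, but the principle is the same), with invented weight values:

```python
def average_weights(models):
    """Element-wise average of several equally sized weight vectors."""
    n = len(models)
    return [sum(w) / n for w in zip(*models)]

# Three copies of the same network, each trained on different data,
# end up with different weights (illustrative made-up numbers):
copy_a = [0.9, 0.1, 0.4]
copy_b = [0.5, 0.3, 0.2]
copy_c = [0.1, 0.2, 0.6]

# Pooling their knowledge is a single averaging step -- no slow,
# lossy explanation from one individual to another, as with humans.
shared = average_weights([copy_a, copy_b, copy_c])
print(shared)
```

Humans have no analogous operation: we cannot merge brains, only communicate through language, one lossy explanation at a time.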

AI outperforms humans on many tasks, including any task that relies on piecing together patterns and information gleaned from large data sets. Humans are sluggish by comparison, with only a fraction of the memory of AI systems.

Still, humans have the upper hand on some fronts. We compensate for our poor memory and slow processing speed by using common sense and logic. We can quickly and easily learn how the world works and use this knowledge to predict the likelihood of events. AI still struggles with this (although researchers are working on it).

Humans are also very energy efficient, while AI needs powerful computers (especially for learning) that consume orders of magnitude more energy than we do. As Hinton puts it:

people can imagine the future […] on a cup of coffee and a slice of toast.

Okay, so what if AI is different from us?

If AI is fundamentally a different intelligence from ours, then it follows that we cannot (or should not) compare it to ourselves.

A new intelligence brings new dangers to society and requires a paradigm shift in the way we talk about and manage AI systems. In particular, we may need to reassess the way we think about protecting against the risks of AI.

One of the fundamental questions that has dominated these debates is how to define AI. After all, AI is not binary; intelligence exists on a spectrum, and the spectrum for human intelligence can be very different from that for machine intelligence.

This point was the downfall of one of the first attempts to regulate AI, in New York in 2017, when auditors couldn’t agree on which systems should be classified as AI. Defining AI in regulatory design is very challenging.

So maybe we should focus less on defining AI in a binary way and more on the specific consequences of AI-driven actions.

What risks do we run?

The speed of AI adoption in industries has taken everyone by surprise, and some experts are concerned about the future of work.

This week, IBM CEO Arvind Krishna announced the company could replace some 7,800 back-office jobs with AI in the next five years. We will need to adapt the way we manage AI as it is increasingly used for tasks once completed by humans.

What’s even more concerning is that AI’s ability to generate fake text, images, and video is driving us to a new era of information manipulation. Our current methods of dealing with human-generated misinformation will not be enough to address this.

Hinton is also concerned about the dangers of AI-powered autonomous weapons, and how bad actors could use them to commit all manner of atrocities.

These are just a few examples of how AI – and in particular, various features of AI – can bring risk to the human world. To productively and proactively regulate AI, we need to take these specifics into account and not apply recipes designed for human intelligence.

The good news is that humans have learned to manage potentially harmful technologies before, and AI is no different.

If you want to learn more about the topics discussed in this article, check out CSIRO’s Everyday AI podcast.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
