How will we know when artificial intelligence has become conscious?
Artificial intelligence research has gained momentum in recent years, and people are interacting with AI more often. AI assistants in the home and self-driving cars were once things from science fiction, but are now becoming reality.
Some researchers and activists question whether AI is approaching sentience, the ability to think and feel on the same level as humans. Some worry that sentient AI could overtake humanity, while others worry about the ethics of subduing an intelligent life form to do our bidding.
So, how do we know if an AI has become sentient? We’ll break down the history of AI, where AI research is now, and how, and even if, we can determine whether AI has crossed the line into consciousness.
Key takeaways
- Artificial intelligence research began in the mid-1950s with the search for artificial general intelligence
- Today, much of AI research focuses on specific tasks rather than general intelligence
- Given our current understanding of consciousness, it may be virtually impossible to determine whether an AI is sentient
A Brief History of Artificial Intelligence
Artificial intelligence has a long history, with early formulations in stories told millennia ago. Greek mythology speaks of Talos, a gigantic bronze statue that guarded the island of Crete and circled the coast of the island three times a day. While the Greeks clearly wouldn’t have described Talos in the language we use today to describe AI, it’s fascinating how long people have pondered the boundary between man and machine.
But only recently has AI become something that humans can study and develop. Many experts point to 1956 as the year when AI research officially started. That year, the Dartmouth Summer Research Project on Artificial Intelligence took place.
Over the eight-week period, about twenty participants met and collaborated to discuss and propose programs that could demonstrate learning capabilities. The Dartmouth Summer Research Project is often credited as the starting point for modern advances in AI, even though the eight-week event acted more like a brainstorming session. Programs created in subsequent years could play checkers, speak English, and solve word problems.
The US Department of Defense began heavily funding AI research in the 1960s. Some researchers, such as Herbert A. Simon, claimed that within 20 years AI could do any job that a human could do. However, this prediction did not materialize, mainly due to the limitations of computer storage, and funding declined in the mid-1970s. Research funding returned in the 1980s, but crashed again in the second half of the decade.
The 1990s saw a second resurgence of research, this time focusing on more specialized and focused AI designed to solve specific problems. This made it easier for researchers to demonstrate success as their AIs achieved tangible results in economics and statistics.
The increasing speed of computers, combined with the internet and access to big data, enabled further advances in machine learning in the early 2010s. In 2015, Google used AI in more than 2,700 projects.
The current landscape
Today, AI research looks quite different than it did in the early years. Early research often focused on artificial general intelligence. People imagine this type of AI as human-like, capable of learning any task a human can. If you read or watch science fiction media, this type of AI is common.
Instead, many of today’s AI researchers focus on producing artificial intelligence to perform specific tasks. For example, deep learning is a form of machine learning that relies on large amounts of data and can imitate how humans acquire knowledge. People and businesses can use deep learning for purposes such as voice or image recognition, recommendation systems, creating art, advertising, investing, fraud detection, and more.
Artificial general intelligence research is now often treated as a separate topic from AI designed for specific tasks.
Current AI products
If you’ve been near a TV for the past few months, you’ve probably heard of OpenAI’s ChatGPT. This chatbot answers the questions you ask instantly. It offers a more streamlined way to search for information online, since the chatbot gives you a direct answer right away instead of a list of websites that might contain conflicting information.
OpenAI has not yet developed ChatGPT enough to replace journalists and others who write for a living. However, the technology has enormous potential and will likely change many disparate fields.
You may have seen a lot of people using another AI product called Lensa last year. Users could upload photos to the Lensa app and – for a fee – receive stylized, AI-generated portraits of themselves to use as their Instagram or Twitter profile picture. While this is a fairly vain use for AI, it shows how ubiquitous the technology is becoming.
There are also many companies that use AI for much more practical purposes. Retailers can use AI to find out where their supply chain is weak or where demand is low and adapt accordingly. Insurance companies can use AI to identify cases at risk of escalation and suggest possible solutions to prevent further conflict. Customer service tasks may be replaced by AI bots over time.
Some automated investment platforms have started harnessing the power of AI to streamline investing for their users. Some apps let you deposit money and let an AI move your investments to maximize returns and protect against downturns. This is especially useful because following the news to decide where to invest can be time consuming.
The limits of intelligence testing and the Turing test
One of the big problems with knowing when artificial general intelligence has gained awareness is that intelligence tests are incredibly limited.
In 1950, English mathematician, computer scientist and philosopher Alan Turing proposed the Turing test, a rudimentary method of determining whether a machine is intelligent.
The test requires two humans and one AI. One human, the interviewer, converses with two test subjects: the other human and the AI. If the interviewer cannot reliably tell which is the human and which is the AI, meaning the AI consistently fools the interviewer into believing it is human, then the AI passes the test.
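The protocol above can be sketched in code. The following is a minimal, hypothetical simulation (the function names, sample questions, and pass threshold are illustrative assumptions, not part of Turing’s formulation): an interviewer examines two transcripts and guesses which belongs to the human, and the AI passes if it is misidentified often enough.

```python
import random

def turing_test(interviewer, human, ai, rounds=10, threshold=0.5):
    """Simulate a Turing-test-style trial.

    interviewer: function taking two transcripts (for subjects "A" and "B")
                 and returning "A" or "B" -- its guess for the human.
    human, ai:   functions mapping a question to a reply.
    The AI "passes" if the interviewer misidentifies the human in at
    least `threshold` of the rounds (an illustrative criterion).
    """
    questions = ["What is your favorite memory?",
                 "Describe the smell of rain."]
    fooled = 0
    for _ in range(rounds):
        # Randomly assign the human and AI to the labels A and B,
        # so the interviewer cannot rely on position.
        if random.random() < 0.5:
            subject_a, subject_b, human_label = human, ai, "A"
        else:
            subject_a, subject_b, human_label = ai, human, "B"
        transcript_a = [(q, subject_a(q)) for q in questions]
        transcript_b = [(q, subject_b(q)) for q in questions]
        if interviewer(transcript_a, transcript_b) != human_label:
            fooled += 1  # interviewer mistook the AI for the human
    return fooled / rounds >= threshold
```

An interviewer that always spots the human makes the AI fail, while one that is consistently fooled makes it pass; real trials, of course, hinge on how convincing the replies actually are.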
Most experts agree that this test is ineffective in determining machine intelligence.
Another proposed method of measuring machine intelligence is the General Language Understanding Evaluation (GLUE). GLUE is like the SAT for AI, asking programs to answer English-language questions based on data sets of varying sizes.
But even the GLUE benchmark and similar tests have limits. Many would argue that animals such as cats and dogs can think and feel, the basic requirements for sentience. However, how many dogs can pass a multiple-choice exam?
With new developments like ChatGPT demonstrating natural language processing (NLP), it is clear that some AI programs can process language. Yet most people agree that this is not the same as achieving sentience.
How do we know if an AI is sentient?
Given the limitations of current tests of sentience, how will we ultimately know whether a machine has acquired the ability to both think and feel?
The truth is that it will be difficult and may not be possible given our current understanding of consciousness. There is no consensus on how to accurately determine whether an AI is conscious.
Research into tests that could prove consciousness, as well as the science of consciousness itself, continues. Future progress may give us answers that let us define and test for sentience more definitively.
Will AI ever be sentient?
Another question to consider is whether it is even possible for artificial intelligence to become sentient. Sentient AI is a hot topic in science fiction, but could it ever become a reality?
Experts have taken mixed views on this issue. A former Google engineer, Blake Lemoine, claimed that AI had already reached consciousness through the chat program Language Model for Dialogue Applications (LaMDA). After conversing with the program, Lemoine claimed that it reported feeling sad after reading Les Misérables and that it feared death.
Google argued that these claims were completely baseless and fired Lemoine last year.
On the other hand, Associate Professor John Basl of Northeastern University’s College of Social Sciences and Humanities, who researches the ethics of emerging technology, believes, “Reactions like ‘We’ve created sentient AI’ are extremely exaggerated.”
In an article for Northeastern, Basl explains that if an AI ever gains sentience, he expects it to be only minimally conscious. It may be aware of what is happening and have basic positive or negative feelings, similar to a dog who “in a deep sense doesn’t prefer the world to be one way over another, but clearly prefers her biscuits over kibble.”
Researchers who believe in the possibility of AI consciousness also debate whether it’s a good idea to pursue it. It’s not hard to find people speculating about worst-case scenarios in which nefarious actors produce millions or billions of bots to foist destructive political agendas on us. Anyone who has seen The Matrix is familiar with media in which AI-enhanced machines turn against humans and eventually replace us as the dominant life form.
Whether that’s nonsense or a potential future reality remains to be seen. Technology has evolved a lot over the past decade and it’s hard to say where it will be in the next decade.
The bottom line
Artificial intelligence is an exciting field that has fascinated people in one form or another since ancient times. However, only in recent decades has it become something people interact with daily.
While there are many questions surrounding the field, there is no hiding the fact that AI can perform many complicated tasks. Numerous companies, from insurance companies to retailers, have started using AI to optimize their work.