Here’s what you need to know about artificial intelligence
In August 1955, a group of scientists put forward a $13,500 funding request to organize a summer workshop at Dartmouth College, New Hampshire. The area they suggested exploring was artificial intelligence (AI).
Although the funding request was modest, the researchers' conjecture was not: "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".
Since these humble beginnings, movies and media have either romanticized AI or cast it as a villain. Yet for most people, AI has remained a matter of debate rather than part of a consciously lived experience.
AI has arrived in our lives
Late last month, AI, in the form of ChatGPT, broke free from sci-fi speculation and research labs and arrived on the desktops and phones of the general public. It's what's known as "generative AI" – suddenly, a cleverly worded prompt can produce an essay, compile a recipe and shopping list, or create a poem in the style of Elvis Presley.
While ChatGPT has been the most dramatic newcomer in a year of generative AI success, similar systems have shown even greater potential for creating new content, with text-to-image prompts used to create vivid graphics that have even won art competitions.
AI may not yet have the living consciousness or theory of mind popularized in sci-fi movies and novels, but it is getting closer to at least disrupting what we think artificial intelligence systems can do.
Some researchers who work closely with these systems have even swooned at the prospect of sentience, as in the case of Google's large language model (LLM) LaMDA. An LLM is a model trained to process and generate natural language.
Generative AI has also raised concerns about plagiarism, exploitation of original content used to create models, ethics of information manipulation and breach of trust, and even “the end of programming”.
At the heart of all this is a question that has become increasingly urgent since the summer workshop at Dartmouth: is AI different from human intelligence?
What does “AI” actually mean?
To qualify as AI, a system must exhibit a certain level of learning and adaptability. For this reason, decision-making systems, automation, and statistics are not AI.
AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.
The main challenge in creating a general AI is to adequately model the world, and all its knowledge, in a consistent and usable way. That is a massive undertaking, to say the least.
Most of what we know as AI today has narrow intelligence, with a particular system tackling a particular problem. Unlike human intelligence, such narrow AI is effective only in the area it has been trained in: for example, fraud detection, facial recognition or social recommendations.
However, AGI would function as humans do. For now, the most notable example of trying to achieve this is using neural networks and “deep learning” trained on massive amounts of data.
Neural networks are inspired by the way the human brain works. Unlike most machine learning models, which run computations over the training data in bulk, neural networks work by passing each data point through an interconnected network one at a time, adjusting the parameters each time.
As more and more data is passed through the network, the parameters stabilize; the final result is the “trained” neural network, which can then produce the desired output based on new data, such as recognizing whether an image contains a cat or a dog.
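To make this concrete, here is a minimal sketch of the idea in Python. It trains a single artificial neuron – the smallest possible "network" – one data point at a time, nudging its parameters after each example until they stabilize. The toy task, learning rate, and epoch count are invented for illustration; real networks stack millions of such units.

```python
# A toy illustration of how a network's parameters stabilize as data passes
# through it: a single neuron (logistic regression) trained one point at a
# time with stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Invented task: classify points by whether x0 + x1 > 1.
X = rng.uniform(0, 1, size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(float)

w = np.zeros(2)   # the "parameters" the network adjusts
b = 0.0
lr = 0.5          # learning rate

for epoch in range(5):
    for xi, yi in zip(X, y):                     # one data point at a time
        pred = 1 / (1 + np.exp(-(xi @ w + b)))   # sigmoid activation
        error = pred - yi                        # gradient of the loss
        w -= lr * error * xi                     # nudge parameters
        b -= lr * error

# The trained parameters now generalize to data they have never seen.
test = np.array([0.9, 0.8])
print(1 / (1 + np.exp(-(test @ w + b))))  # close to 1: "above the line"
```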
The significant leap forward in AI today is driven by technological improvements in how we train large neural networks, adjusting vast numbers of parameters in each run thanks to the capabilities of large cloud computing infrastructures. For example, GPT-3 (the AI system underlying ChatGPT) is a large neural network with 175 billion parameters.
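As a rough illustration of what a parameter count measures, this sketch tallies the weights and biases of a hypothetical small image classifier and compares the total with GPT-3's 175 billion. The layer sizes are invented for the example.

```python
# Each dense layer with n_in inputs and n_out outputs carries n_in * n_out
# weights plus n_out biases; a model's "size" is the sum across layers.
def dense_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out  # weights + biases

# Hypothetical small classifier: 784 pixels -> 128 hidden units -> 10 classes.
small_model = dense_params(784, 128) + dense_params(128, 10)
print(f"{small_model:,} parameters")               # 101,770

# GPT-3, by comparison, has 175 billion parameters.
print(f"{175_000_000_000 / small_model:,.0f}x larger")
```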
What does AI need to work?
AI needs three things to be successful.
First, it needs high-quality, unbiased data, and lots of it. Researchers building neural networks draw on the large datasets created by the digitization of society.
Copilot, which assists human programmers, draws its data from the billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.
Text-to-image tools, such as Stable Diffusion, DALL-E 2, and Midjourney, use image-text pairs from datasets such as LAION-5B. AI models will continue to grow in sophistication and impact as we digitize more of our lives and provide them with alternative data sources, such as simulated data or data from game settings such as Minecraft.
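For a sense of what the image-text pairs mentioned above look like, here is a hedged sketch of the record structure in the spirit of LAION-5B entries, where each record pairs an image URL with its caption. The records and filter below are invented placeholders; the real dataset holds billions of pairs.

```python
# Invented examples of the URL-caption record structure used to train
# text-to-image models; not actual LAION-5B entries.
pairs = [
    {"url": "https://example.com/cat.jpg", "caption": "a tabby cat on a sofa"},
    {"url": "https://example.com/dog.jpg", "caption": "a golden retriever puppy"},
]

# Training pipelines typically filter pairs before use, e.g. by caption length.
usable = [p for p in pairs if len(p["caption"].split()) >= 3]
for p in usable:
    print(p["caption"], "->", p["url"])
```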
AI also needs computing infrastructure for effective training. As computers become more powerful, models that now require intensive effort and large-scale computing may be handled locally in the near future. Stable Diffusion, for example, can already run on local computers rather than in cloud environments.
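As an example of how feasible local generation has become, the sketch below uses the open-source Hugging Face diffusers library to run Stable Diffusion on one's own machine. The model ID and prompt are illustrative assumptions; a recent GPU speeds this up considerably, but CPU inference also works, just slowly.

```python
# A minimal sketch of local Stable Diffusion inference with the diffusers
# library (pip install diffusers transformers torch). The model ID below is
# one commonly published checkpoint, used here as an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "cpu" (drop torch_dtype above for CPU)

image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```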
The third thing AI needs is improved models and algorithms. Data-driven systems continue to make rapid progress in domain after domain once considered the territory of human cognition.
However, because the world around us is constantly changing, AI systems must be retrained continually using new data. Without this critical step, they will produce answers that are factually incorrect, or fail to account for new information that has emerged since they were trained.
Neural networks are not the only approach to AI. Another prominent camp in artificial intelligence research is symbolic AI – instead of digesting huge datasets, it relies on rules and knowledge, similar to the human process of forming internal symbolic representations of particular phenomena.
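A minimal sketch of the symbolic approach, with invented facts and rules: instead of learning from data, the system repeatedly applies explicit if-then rules until no new conclusions can be derived, a technique known as forward chaining.

```python
# Forward chaining over hand-written symbolic knowledge: no training data,
# just facts and if-then rules. The facts and rules are illustrative.
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur"}, "is_animal"),
    ({"gives_milk", "is_animal"}, "is_mammal"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # derive a new symbolic fact
            changed = True

print(facts)  # {'has_fur', 'gives_milk', 'is_animal', 'is_mammal'}
```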
But the balance of power has tilted sharply towards data-driven approaches over the past decade, with the "founding fathers" of modern deep learning recently awarded the Turing Award, the equivalent of the Nobel Prize in computer science.
Data, computation and algorithms form the basis of the future of AI. Everything indicates that rapid progress will be made in all three categories in the near future.
- George Siemens, co-director and professor, Centre for Change and Complexity in Learning, University of South Australia
This article is republished from The Conversation under a Creative Commons license. Read the original article.