Is it time to slow down the development of artificial intelligence (AI)? If you’ve been quietly asking yourself that question, you’re not alone.
In the past week, a number of prominent AI figures signed an open letter calling for a six-month pause in the development of models more powerful than GPT-4; European researchers called for stricter AI regulation; and longtime AI researcher and critic Eliezer Yudkowsky demanded a complete halt to AI development in the pages of TIME magazine.
Meanwhile, the industry shows no signs of slowing down. In March, a senior AI manager at Microsoft reportedly spoke of “very, very high” pressure from CEO Satya Nadella to bring GPT-4 and other new models to the public “at a very high speed”.
I worked at Google until 2020, when I left to study responsible AI development, and now I’m researching creative collaboration between humans and AI. I am excited about the potential of artificial intelligence and I believe it is already ushering in a new era of creativity. However, I think a temporary pause in the development of more powerful AI systems is a good idea. Let me explain why.
What is GPT-4 and what does the letter ask for?
The open letter from the American non-profit organization Future of Life Institute makes a clear request to AI developers:
We call on all AI labs to immediately pause training AI systems that are more powerful than GPT-4 for at least 6 months.
So what is GPT-4? Like its predecessor GPT-3.5 (which powers the popular ChatGPT chatbot), GPT-4 is a kind of generative AI software called a “large language model”, developed by OpenAI.
GPT-4 is much larger than its predecessor and is trained on significantly more data. Like other large language models, GPT-4 works by predicting the next word in response to a prompt – but it is remarkably capable nonetheless.
In tests, it has passed legal and medical exams, and in many cases it can write software better than professional programmers. The full range of its capabilities has yet to be discovered.
Good, bad and downright disturbing
GPT-4 and similar models are likely to have huge effects in many layers of society.
On the positive side, they can enhance human creativity and scientific discovery, lower barriers to learning, and power personalized learning resources. On the other hand, they can enable personalized phishing attacks, produce disinformation at scale, and be used to breach the security of the computer systems that run vital infrastructure.
OpenAI’s own research suggests that models such as GPT-4 are “general-purpose technologies” that will affect about 80% of the US workforce.
Layers of civilization and the pace of change
The American writer Stewart Brand has argued that a “healthy civilization” needs different systems or layers to move at different speeds:
Fast layers innovate; slow layers stabilize. The whole combines learning with continuity.
In Brand’s “pace layers” model, the lower layers change more slowly than the upper layers.
Technology is usually at the top, somewhere between fashion and commerce. Things like regulation, economic systems, guardrails, ethical frameworks and other aspects exist in the slower layers of governance, infrastructure and culture.
Right now, technology is accelerating much faster than our ability to understand and regulate it – and if we’re not careful, it will also force changes in those lower layers that are too fast to be safe.
The American sociobiologist E.O. Wilson described the dangers of such a mismatch in rates of change as follows:
The real problem of humanity is this: we have Paleolithic emotions, medieval institutions and god-like technology.
Are there good reasons to maintain the current high pace?
Some argue that if the top AI labs slow down, less scrupulous actors or countries such as China will overtake them.
However, training complex AI systems is not easy. OpenAI is well ahead of its US competitors (including Google and Meta), and developers in China and elsewhere are lagging further behind.
It is unlikely that “rogue groups” or governments will surpass GPT-4’s capabilities in the foreseeable future. Most AI talent, knowledge and computing infrastructure is concentrated in a handful of top labs.
Other critics of the Future of Life Institute letter say it rests on an exaggerated perception of current and future AI capabilities.
Whether or not you believe that AI will reach a state of general superintelligence, there is no denying that this technology will impact many facets of human society. Taking the time to acclimate our systems to the pace of change seems prudent.
Slowing down is wise
While there’s plenty of room for disagreement over specific details, I believe the Future of Life Institute’s letter points in a sensible direction: to take ownership of the pace of technological change.
Despite what we’ve seen of the disruption caused by social media, Silicon Valley still follows Facebook’s infamous motto: “move fast and break things.”
I think the wiser course of action is to slow down and think about where we want to take these technologies, so that we and our systems can adapt and engage in diverse, thoughtful conversations. This is not about stopping; it is about moving at a sustainable pace. We can choose to steer this technology, rather than assume it has a life of its own beyond our control.
After some thought, I added my name to the list of signatories to the open letter, which, according to the Future of Life Institute, now numbers about 50,000 people. While a six-month moratorium won’t solve everything, it would be helpful: it signals the right intent, prioritizing careful consideration of benefits and risks over uncritical, accelerated, profit-driven progress.
we definitely need more regulation on ai
— Sam Altman (@sama) March 13, 2023