Elon Musk, Steve Wozniak and more than 1,300 academics, tech and business people have signed an open letter from the Future of Life Institute (FLI) calling for a six-month halt to the “out of control” AI development that, they say, poses “serious risks to society and humanity.”
That development has accelerated at a breakneck pace since the release of GPT-3 last November – the natural language generative AI model already being used to answer interview questions, develop malware, write application code, revolutionize search, create award-winning art, amplify productivity suites from Microsoft and Google, and more.
A global race to embrace and improve the technology – and its new successor, the ‘multimodal’ GPT-4, capable of analyzing images and exhibiting significantly improved deductive reasoning – has fueled such rapid, uncontrolled investment, the FLI letter warns, that adoption of “human-competitive” AI is now progressing without consideration of the long-term implications.
Those implications, according to the letter, include the potential to “flood our information channels with propaganda and untruth”; automation of “all jobs”; “loss of control over our civilization”; and development of “non-human minds that could eventually outnumber, outsmart, obsolete and replace us.”
To prevent such AI-driven destruction, the letter calls for a six-month “public and verifiable” hiatus in the development of AI models more powerful than GPT-4 – or, in the absence of a quick pause, a government-enforced moratorium on AI development.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development [to] ensure that systems that adhere to them are safe beyond a reasonable doubt,” the letter argues.
The letter doesn’t call for a complete pause in AI development, FLI notes, but a “step back from the perilous race toward increasingly unpredictable black-box models with emergent capabilities.”
“AI research and development must be refocused on making today’s powerful, sophisticated systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.”
Tech giants virtually absent
The letter comes less than a year after Google AI researcher Blake Lemoine was placed on administrative leave for claiming that Google’s LaMDA AI engine had become so advanced that it was sentient – a claim that Google’s ethicists and technologists flatly rejected.
Lemoine is not on the list of signatories to FLI’s open letter, but many signatories share responsibility for the lightning pace of AI development, with Musk – one of the original co-founders of GPT-3 creator OpenAI – recently reported to have pitched AI researchers on developing an alternative, non-woke platform with fewer restrictions on creating objectionable content.
The list of signatories — which has been paused to allow vetting processes to catch up amid high demand — includes executives at content-based companies such as Pinterest and Getty Images, as well as AI and robotics think tanks including the Center for Humane Technology, Cambridge Center for the Study of Existential Risk, Edmond and Lily Safra Center for Ethics, UC Berkeley Center for Human-Compatible AI, Unanimous AI, and more.
Australian signatories include Andrew Francis, Professor of Mathematics at Western Sydney University; Professors Andrew Robinson and David Balding and neuroscience research fellow Colin G Hales from the University of Melbourne; UNSW scientia professor Robert Brooks; University of Queensland Honorary Professor Joachim Diederich; University of Sydney law professor Kimberlee Weatherall; and others.
Tech giants like Meta, which recently closed its Responsible Innovation team after a year, are virtually absent from the list – which includes no Apple, Twitter, or Instagram employees, just one Meta employee, three Google researchers and software engineers, and three employees of Google AI subsidiary DeepMind.
The letter isn’t the first time FLI has warned of the risks of AI, with previous open letters warning of lethal autonomous weapons, the importance of guiding AI principles, and the need to prioritize research on “robust and beneficial” AI.