Dr. Geoffrey Hinton, widely referred to as the “godfather” of AI, confirmed in an interview with the New York Times that he has quit his job at Google in order to speak openly about the dangers of the technology he helped develop.
Hinton’s pioneering work in neural networks – for which he won the Turing Award in 2018 along with two other university professors – laid the groundwork for today’s advancements in generative AI.
The lifelong academic and computer scientist joined Google in 2013, after the tech giant spent $44 million to acquire a company founded by Hinton and two of his students, Ilya Sutskever (now chief scientist at OpenAI) and Alex Krizhevsky. Their neural network system eventually led to the creation of ChatGPT and Google Bard.
But Hinton now partly regrets his life’s work, as he told the NYT. “I comfort myself with the normal excuse: if I hadn’t done it, someone else would have,” he said. He decided to leave Google so he could speak up about the dangers of AI without having to consider how his warnings might affect the company itself.
In today’s NYT, Cade Metz suggests I left Google so I could criticize Google. Actually, I left so I could talk about the dangers of AI without thinking about how it will affect Google. Google acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
According to the interview, Hinton was motivated by Microsoft’s integration of ChatGPT into its Bing search engine, which he fears will drive the technology giants into potentially unstoppable competition. This could result in such an abundance of fake photos, videos, and texts that the average person will no longer be able to “tell what is true.”
But misinformation aside, Hinton also expressed concern about AI’s potential to eliminate jobs and even write and run its own code, as it appears poised to become smarter than humans much sooner than expected.
The more companies improve artificial intelligence without monitoring, the more dangerous it becomes, Hinton believes. “Look at how it was five years ago and how it is now. Take the difference and propagate it forward. That’s creepy.”
The need to control the development of AI
Geoffrey Hinton is not alone in expressing his fears about AI’s rapid and uncontrolled development.
In late March, more than 2,000 industry experts and executives across North America signed an open letter calling for a six-month pause in training systems more powerful than OpenAI’s GPT-4.
The signatories – including DeepMind researchers, computer scientist Yoshua Bengio, and Elon Musk – stressed the need for regulatory policies and warned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Across the Atlantic, the growth of ChatGPT has spurred efforts by EU and national authorities to effectively regulate AI development without hindering innovation.
Individual Member States are trying to oversee the operation of advanced models. For example, Spain, France and Italy have opened investigations into ChatGPT over data privacy concerns – with the latter becoming the first Western country to regulate its use following a temporary ban on the service.
The union as a whole is also moving closer to passing the anticipated AI Act – the world’s first AI law by a major regulatory body. Members of the European Parliament last week agreed to advance the draft to the next stage, called a trilogue, in which legislators and member states will work out the final details of the bill.
According to Margrethe Vestager, the EU’s head of tech regulation, the bloc is likely to agree on the law this year, and companies could already start thinking about its implications.
“With these groundbreaking rules, the EU is at the forefront of developing new global standards to ensure AI is trustworthy. By setting the standards, we can pave the way for ethical technology globally and ensure that the EU remains competitive along the way,” Vestager said when the bill was first announced.
Unless regulatory efforts in Europe and the rest of the world are accelerated, we risk a repeat of the Oppenheimer-style approach Hinton is now sounding the alarm about: “If you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”