AI ‘godfather’ Geoffrey Hinton leaves Google, sounds the alarm about the dangers of artificial intelligence
Turing Award winner and AI pioneer Geoffrey Hinton has resigned from his position at Google to speak more openly about the dangers of artificial intelligence, raising questions about whether tech giants have silenced the people most qualified to inform the public about the effects of emerging technology.
In 2012, Hinton and two of his students, Ilya Sutskever and Alex Krizhevsky, built a convolutional neural network that revolutionized computer vision.
Later that year, Hinton, Sutskever and Krizhevsky turned their technology into a company, DNNresearch, which Google bought at auction for $66 million (US$44 million).
Now, after more than a decade at Google, the 75-year-old who has been dubbed “the godfather of AI” is leaving the tech giant so he can speak freely without Google executives breathing down his neck.
“I left so I could talk about the dangers of AI without thinking about the implications for Google,” he tweeted on Monday night, clarifying the implications of a New York Times article that first broke the news of Hinton’s departure.

Hinton told the New York Times that the latest AI boom, driven by ChatGPT’s sudden and immense popularity, worries him.
The issues that worry Hinton range from the internet being flooded with AI-generated images, videos and text to the point that people don’t know “what’s true” anymore, to the wide-scale impact on jobs, and even the threat of AI acquiring unexpected capabilities as it starts writing and executing its own code.
“The idea that this stuff could actually get smarter than humans — a few people believed that,” he told the New York Times.
“But most people thought it was far away. And I thought it was far away. I thought it was 30 to 50 years away or even longer. Of course I don’t think so anymore.”
Hinton was notably absent from the open letter calling for a six-month freeze on AI development to give regulators time to catch up – a letter whose many signatories include adherents of the longtermist ideology.
But now that his time at Google is officially over, Hinton is publicly warning that the arms race between tech giants may pose unprecedented risks to humanity.
“I don’t think they should scale this up further until they understand if they can get it under control,” he said.
The arrival of ChatGPT sent a shock wave through Google, whose executives saw the AI-powered chatbot as a direct threat to the company’s lucrative search product.
When Microsoft announced it would add AI to Bing, Google quickly followed suit, much to the chagrin of shareholders.
The conflict between commercial interests and ethical AI development is not new.
Back in 2020, Google fired prominent AI ethics researcher Timnit Gebru after she co-authored a paper describing four key risks of large natural language models such as those underpinning Bing, ChatGPT, and Google Bard.
Those concerns were about environmental costs, the embedding of bias in AI, the decision to build systems that exist purely to serve business needs, and the potential for mass disinformation.
Gebru subsequently founded an AI ethics institute and co-authored a critical response to the AI pause letter alongside AI ethicist Margaret Mitchell, whom Google fired in 2021.
In response to the news about Hinton, Mitchell warned that “the world’s most qualified researchers cannot say what the future holds for AI because they are implicitly/culturally or even directly censored by short-term gain”.