Inside Google DeepMind’s approach to AI security

This article features an interview with Lila Ibrahim, COO of Google DeepMind. Ibrahim will speak at TNW Conference, which takes place on 15 and 16 June in Amsterdam. If you’d like to join the event (and say hello to our editors!), we’ve got something special for our loyal readers. Use the promotional code READ-TNW-25 to receive a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

AI security has become a mainstream concern. The rapid development of tools such as ChatGPT and deepfakes has led to fears of job losses, disinformation and even destruction. Last month, a warning that artificial intelligence posed a “risk of extinction” drew headlines around the world.

The warning came in a statement signed by more than 350 industry heavyweights. Among them was Lila Ibrahim, the Chief Operating Officer of Google DeepMind. As a leader of the pioneering AI lab, Ibrahim has a front-row view of the threats – and the opportunities.

DeepMind has delivered some of the most notable breakthroughs in the field, from overcoming complex games to revealing the structure of the protein universe.

The company’s ultimate mission is to create artificial general intelligence, a loosely defined concept that broadly refers to machines with human-level cognitive abilities. It’s a visionary ambition that must remain grounded in reality – which is where Ibrahim comes in.

In 2018, Ibrahim was appointed as DeepMind’s first-ever COO. Her role oversees operations and growth, with a strong focus on building AI responsibly.

“New and emerging risks – such as bias, security and inequality – need to be taken extremely seriously,” Ibrahim told TNW via email. “Similarly, we want to make sure we do what we can to maximize the beneficial outcomes.”

Lila Ibrahim