Inside Google DeepMind’s approach to AI safety
This article features an interview with Lila Ibrahim, COO of Google DeepMind. Ibrahim will speak at TNW Conference, which takes place on 15 and 16 June in Amsterdam. If you’d like to join the event (and say hello to our editors!), we’ve got something special for our loyal readers. Use the promotional code READ-TNW-25 and receive a 25% discount on your business pass for TNW Conference. See you in Amsterdam!
AI safety has become a mainstream concern. The rapid development of tools such as ChatGPT and deepfakes has stoked fears of job losses, disinformation and even the destruction of humanity. Last month, a warning that artificial intelligence poses a “risk of extinction” drew headlines around the world.
The warning came in a statement signed by more than 350 industry heavyweights. Among them was Lila Ibrahim, the Chief Operating Officer of Google DeepMind. As the leader of the pioneering AI lab, Ibrahim has a front-row view of the threats – and opportunities.
DeepMind has delivered some of the most notable breakthroughs in the field, from mastering complex games to revealing the structure of the protein universe.
The company’s ultimate mission is to create artificial general intelligence, a vague concept that broadly refers to machines with human-level cognitive abilities. It’s a visionary ambition that must remain grounded in reality – which is where Ibrahim comes in.
In 2018, Ibrahim was appointed as DeepMind’s first-ever COO. In the role, she oversees operations and growth, with a strong focus on building AI responsibly.
“New and emerging risks – such as bias, security and inequality – need to be taken extremely seriously,” Ibrahim told TNW via email. “Similarly, we want to make sure we do what we can to maximize the beneficial outcomes.”

Much of Ibrahim’s time is spent ensuring that the company’s work delivers positive outcomes for society. She highlighted four pillars of that strategy.
1. The scientific method
To uncover the building blocks of advanced AI, DeepMind follows the scientific method.
“This means constructing and testing hypotheses, stress testing our approach and results through peer review,” says Ibrahim. “We believe the scientific approach is the right one for AI because the roadmap for building advanced intelligence is still unclear.”
2. Multidisciplinary teams
DeepMind uses a variety of systems and processes to guide its research into the real world. One example is an internal review committee.
The multidisciplinary committee consists of machine learning researchers, ethicists, safety and security experts, engineers and policy professionals. At regular meetings, they discuss ways to extend the technology’s benefits, changes to research areas and projects that need further external consultation.
“Having an interdisciplinary team with a unique set of perspectives is a critical part of building a safe, ethical and inclusive AI-powered future that benefits us all,” says Ibrahim.
3. Clear, shared principles
To guide the company’s AI development, DeepMind has established a set of clear, shared principles. The company’s Operating Principles, for example, define the lab’s commitment to mitigating risk while specifying what it refuses to pursue, such as autonomous weapons.
“They also codify our goal to prioritize widespread benefits,” says Ibrahim.
4. Consulting external experts
One of Ibrahim’s main concerns is representation. AI has often reinforced prejudices, particularly against marginalized groups, who are frequently underrepresented in both the training data and the teams building the systems.
To mitigate these risks, DeepMind collaborates with external experts in areas such as bias, persuasiveness, biosecurity and the responsible use of models. The company also works with a wide range of communities to understand the impact of technology on them.
“This feedback allows us to refine and retrain our models to suit a wider audience,” says Ibrahim.
That engagement has already produced powerful results.
The business case for AI safety
In 2021, DeepMind solved one of biology’s greatest challenges: the problem of protein folding.
Using an AI program called AlphaFold, the company predicted the 3D structures of nearly every known protein in the universe – about 200 million in all. Scientists believe the work could dramatically speed up drug development.
“AlphaFold is a singular and momentous advance in life science that demonstrates the power of AI,” said Eric Topol, director of the Scripps Research Translational Institute. “Determining the 3D structure of a protein used to take many months or years; it now takes seconds.”
AlphaFold’s success was shaped by a diverse range of outside experts. In the early stages of the work, DeepMind explored a series of big questions. How could AlphaFold accelerate biological research and applications? What might the unintended consequences be? And how could progress be shared responsibly?
In search of answers, DeepMind sought input from more than 30 leaders in areas ranging from biosecurity to human rights. Their feedback guided DeepMind’s strategy for AlphaFold.
In one example, DeepMind had initially considered omitting predictions for which AlphaFold had low confidence or high predictive uncertainty. But the outside experts recommended keeping these predictions in the release.
DeepMind followed their advice. As a result, users of AlphaFold now know that if the system has little confidence in a predicted structure, that is a good indication of an intrinsically disordered protein.
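For the technically curious, this signal is easy to inspect: structure files in the AlphaFold database store each residue’s confidence score (pLDDT, on a 0–100 scale) in the standard B-factor field of the PDB format. The short Python sketch below is an illustration rather than an official tool; it reads those scores from a downloaded file (the filename shown is a hypothetical example) and flags the low-confidence stretches that often correspond to disordered regions.

```python
# Minimal sketch: read per-residue confidence (pLDDT) from an AlphaFold PDB file.
# AlphaFold DB stores pLDDT in the B-factor column (0-100, higher = more confident).

def read_plddt(pdb_path):
    """Return {residue_number: pLDDT}, taking the CA atom of each residue."""
    scores = {}
    with open(pdb_path) as f:
        for line in f:
            # Fixed-width PDB columns: atom name 13-16, residue number 23-26, B-factor 61-66.
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                res_num = int(line[22:26])
                scores[res_num] = float(line[60:66])
    return scores

if __name__ == "__main__":
    # Hypothetical example file downloaded from the AlphaFold database.
    plddt = read_plddt("AF-P12345-F1-model_v4.pdb")
    low = [r for r, s in plddt.items() if s < 50]  # pLDDT below 50: often disordered
    print(f"{len(low)} of {len(plddt)} residues are low-confidence (pLDDT < 50)")
```

Because the experts persuaded DeepMind to keep these predictions in the release, the confidence score doubles as a useful biological signal rather than a gap in the data.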
Scientists all over the world are reaping the benefits. In February, DeepMind announced that the protein database is now used by more than 1 million researchers. Their work addresses major global challenges, from developing malaria vaccines to fighting plastic pollution.
“Now you can look up a 3D structure of a protein almost as easily as a Google keyword search — it’s science at digital speed,” says Ibrahim.
Responsible AI also requires a diverse talent pool. To expand the pipeline, DeepMind is partnering with academia, community groups and charities to support underrepresented communities.
The motivations are not just altruistic. Closing the skills gap will bring more talent to DeepMind and the wider technology sector.
As AlphaFold has shown, responsible AI can also accelerate scientific progress. And amid growing public concern and regulatory pressure, the business case is only getting stronger.
To hear more from Lila Ibrahim, use the promo code READ-TNW-25 and receive a 25% discount on your business pass for TNW Conference.