Why AI governance is critical to building reliable, explainable AI
Content provided by IBM and TNW
The dangers of robots evolving beyond our control have been well documented in sci-fi movies and TV: Her, Black Mirror, Surrogates, I, Robot… need we go on?
While this may seem like a distant fantasy, FICO’s 2021 State of Responsible AI report found that 65% of companies are actually unable to explain how specific AI model decisions or predictions are made.
While AI is undeniably helping to move our businesses and society forward at lightning speed, we have also seen the negative consequences that a lack of oversight can have.
Study after study has shown that AI-driven decision-making can lead to biased outcomes, from racial profiling in predictive policing algorithms to sexist hiring decisions.
As governments and businesses rapidly adopt AI tools, AI ethics will affect many aspects of society. But according to the FICO report, 78% of companies said they were “ill equipped to ensure the ethical implications of using new AI systems,” and only 38% had steps for detecting and mitigating data bias.
As usual with disruptive technologies, the speed of AI development has quickly surpassed the speed of regulation. But in the race to adopt AI, many companies are beginning to realize that regulators are now catching up. A number of lawsuits have already been filed against companies for developing or simply using biased AI algorithms.
Companies are feeling the heat of AI regulation
This year, the EU unveiled the AI Liability Directive, a bill that will make it easier to sue companies for damages caused by AI, part of a broader effort to prevent companies from developing and deploying harmful AI. The bill adds an extra layer to the proposed AI Act, which requires extra controls on ‘risky’ applications of AI, such as in policing, recruitment, or healthcare. The bill, unveiled earlier this month, is likely to become law within a few years.
While some worry that the AI Liability Directive will curb innovation, the goal is to hold AI companies accountable and require them to explain how their AI systems are built and trained. Tech companies that do not follow the rules risk class actions across Europe.
While the US has been slower to adopt protective policies, the White House also released the Blueprint for an AI Bill of Rights earlier this month, outlining how consumers should be protected from harmful AI:
- Artificial intelligence must be safe and effective
- Algorithms should not discriminate
- Data privacy must be protected
- Consumers need to know when AI is being used
- Consumers should be able to opt out of AI systems and speak to a human instead
But there is a catch. “It is important to realize that the AI Bill of Rights is not binding legislation,” writes Sigal Samuel, a senior reporter at Vox. “It’s a set of recommendations that government agencies and tech companies can voluntarily comply with – or not. That’s because it was created by the Office of Science and Technology Policy, a White House agency that advises the president but cannot make actual laws.”
With or without strict AI regulations, a number of US-based companies and institutions have already faced lawsuits for unethical AI practices.
And it’s not just legal fees that businesses need to worry about. Public trust in AI is declining. A Pew Research Center study asked 602 tech innovators, developers, and business and policy leaders: “By 2030, will most AI systems used by organizations of all kinds adopt ethical principles primarily focused on the common good?” 68% didn’t think so.
Whether or not a company loses a legal battle over allegations of biased AI, the damage such incidents do to its reputation can be just as costly.
While this casts a gloomy light on the future of AI, all is not lost. IBM’s Global AI Adoption Index found that 85% of IT professionals agree that consumers are more likely to choose a company that is transparent about how its AI models are built, managed, and used.
Companies that take steps to adopt ethical AI practices can reap the benefits. So why are so many slow to take the plunge?
The problem may be that while many companies want to adopt ethical AI practices, many don’t know where to start. We spoke with Priya Krishnan, who leads the data and AI product management team at IBM, to find out how building a strong AI governance model can help.
AI governance
According to IBM, “AI governance is the process of defining policies and establishing accountability to guide the creation and implementation of AI systems in an organization.”
“Before governance, people went straight from experimentation to production with AI,” Krishnan says. “But then they realized, ‘Well, wait a minute, this is not the decision I expect the system to make. Why is this happening?’ They couldn’t explain why the AI made certain decisions.”
AI governance is really about making sure companies are aware of what their algorithms are doing — and have the documentation to back it up. This means tracking and recording how an algorithm is trained, the parameters used in the training, and any metrics used during the testing phases.
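In its simplest form, that record can be an append-only log written at training time. The sketch below is a minimal illustration in plain Python, not IBM’s tooling; every file name, field, and metric in it is hypothetical. It captures the three things described above: how the model was trained (the dataset, fingerprinted so the exact version is traceable), the parameters used in training, and the metrics from the testing phase.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Append-only audit trail: one JSON record per training run (illustrative file name).
AUDIT_LOG = Path("model_audit_log.jsonl")


def dataset_fingerprint(path: str) -> str | None:
    """Hash the training data file so the exact dataset version is traceable."""
    p = Path(path)
    return hashlib.sha256(p.read_bytes()).hexdigest() if p.exists() else None


def record_training_run(model_name, dataset_path, hyperparams, test_metrics):
    """Append one training run's provenance to the audit log."""
    entry = {
        "model": model_name,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_path,
        "dataset_sha256": dataset_fingerprint(dataset_path),
        "hyperparameters": hyperparams,
        "test_metrics": test_metrics,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


# Hypothetical example: document how a credit model was trained and how it tested,
# including a fairness metric alongside accuracy.
record_training_run(
    model_name="loan_approval_v3",
    dataset_path="data/applications_2022.csv",
    hyperparams={"algorithm": "gradient_boosting", "max_depth": 4, "n_estimators": 200},
    test_metrics={"accuracy": 0.91, "demographic_parity_gap": 0.03},
)
```

The append-only JSON Lines format is deliberate: records accumulate rather than being overwritten, so the full history of a model can be handed to an auditor or a risk manager as-is.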
With this in place, companies can understand what goes on beneath the surface of their AI systems and quickly retrieve documentation in the event of an audit. Krishnan pointed out that this transparency also helps break down knowledge silos within a company.
“If a data scientist leaves the company and you don’t have their past work captured in these processes, it’s very difficult to manage. Anyone looking into the system doesn’t know what happened. So this process of documentation simply provides a basic understanding of what’s going on, and makes it easier to explain to other departments within the organization (such as risk managers).”
While regulations are still under development, adopting AI governance is now an important step towards what Krishnan calls “future-proofing”:
“[Regulations are] coming fast and strong. Right now, people are producing documents manually, after the fact, for auditing purposes,” she says. By starting to document now, companies can instead prepare for future regulations before they take effect.
The innovation versus governance debate
Companies may face increasing competition to innovate quickly and be first to market. So won’t taking the time for AI governance slow this process down and stifle innovation?
Krishnan argues that AI governance does not stop innovation any more than brakes stop someone from driving: “There is traction control in a car, there are brakes in a car. All of these are designed to help you go faster and safer. That’s how I would think about AI governance. It’s really about getting the most value out of your AI while making sure there are crash barriers to help you innovate.”
And this aligns with the main reason for introducing AI governance: it just makes business sense. Nobody wants defective products and services. Establishing clear and transparent documentation standards, checkpoints and internal review processes to reduce bias can ultimately help companies create better products and improve speed to market.
Still don’t know where to start?
This month, the tech giant launched IBM AI Governance, a one-stop solution for businesses struggling to better understand what goes on beneath the surface of their AI systems. The tool uses automated software that works with a company’s data science platform to develop a consistent and transparent process for managing algorithmic models, tracking development time, metadata, post-deployment monitoring, and custom workflows. This takes pressure off data science teams, allowing them to focus on other tasks. The tool also gives business leaders continuous insight into their models and supports the right documentation in case of an audit.
This is an especially good option for companies that use AI across the organization and don’t know where to focus first.
“Before you buy a car, you want to take it for a test drive. At IBM, we’ve invested in a team of engineers who help our customers take AI governance for a test drive to get them started. In just weeks, the IBM Client Engineering team can help teams innovate with the latest AI governance technology and approaches, using their own business models and data. It’s an investment in our customers to co-create quickly with IBM technology so they can get up and running quickly,” said Krishnan.