4 questions to ask when evaluating AI prototypes for bias

True, progress has been made on data protection in the US thanks to the passage of several laws, such as the California Consumer Privacy Act (CCPA), and non-binding documents, such as the Blueprint for an AI Bill of Rights. Yet there are currently no standard rules dictating how tech companies should mitigate AI bias and discrimination.
As a result, many companies are lagging behind in developing ethical tools that put privacy at the center. Almost 80% of US data scientists are male and 66% are white, an inherent lack of diversity and demographic representation among the people building automated decision-making tools that often leads to biased results.
Significant improvements in design review processes are needed to ensure technology companies consider all people when creating and customizing their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputations, and facing serious litigation. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed, and used. We can expect this number to rise as more users push back against harmful and biased technology.
What should companies look for when analyzing their prototypes? Here are four questions development teams should be asking themselves:
Have we ruled out all types of bias in our prototype?
Technology has the power to revolutionize society as we know it, but it will ultimately fail if it doesn't benefit everyone equally.
To build effective, bias-free technology, AI teams need to create a list of questions to ask during the review process that can help them identify potential problems in their models.
There are many methods AI teams can use to assess their models, but before applying any of them, it is critical to evaluate the end goal and identify any groups that might be disproportionately affected by the model's outputs. A minimal sketch of one such check appears below.
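The sketch compares a prototype's positive-outcome rates across demographic groups, sometimes called a demographic parity or disparate impact check. The audit table, column names, and 80% threshold are illustrative assumptions for this example, not part of any specific team's review process.

```python
# A minimal sketch of one bias check, assuming a simple audit table with
# one row per decision the prototype made. All names here are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive model outcomes within each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; 1.0 means perfect parity."""
    return rates.min() / rates.max()

# Hypothetical audit data: group membership and the model's decision.
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = selection_rates(audit, "group", "approved")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb from US employment law
    print("Warning: one group receives positive outcomes far less often -- investigate.")
```

On this toy data, group A is approved 67% of the time and group B only 40%, giving a ratio of 0.60 and triggering the warning. A check like this is a starting point for the harder questions, not a substitute for them.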
For example, AI teams should be aware that facial recognition technologies can unintentionally discriminate against people of color – something that is far too common in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon's Rekognition facial recognition software incorrectly matched 28 members of the US Congress with mugshots. A whopping 40% of the incorrect matches were people of color, even though people of color make up only about 20% of Congress.
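To see why those two percentages matter, a quick back-of-the-envelope calculation (using only the figures quoted above) shows how much more often the errors fell on people of color than their share of Congress alone would predict:

```python
# Back-of-the-envelope disparity check using the ACLU figures cited above.
false_match_share = 0.40  # share of the 28 false matches who were people of color
population_share = 0.20   # approximate share of Congress who are people of color

overrepresentation = false_match_share / population_share
print(f"People of color were {overrepresentation:.1f}x overrepresented among false matches.")
# -> 2.0x: errors hit this group at twice the rate its size alone would predict.
```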
By asking challenging questions, AI teams can find new ways to improve their models and work to avoid these scenarios. For example, a close examination can help them determine whether they need more data or whether a third party, such as a privacy expert, should review their product.
Plot4AI is a great resource for people who want to get started.