Artificial intelligence promises to create a better and more equitable world. Left unchecked, however, it could also perpetuate historical inequities. Fortunately, businesses can take measures to mitigate this risk so they can use AI systems—and decision-making software in general—with confidence. PwC explains.
Artificial intelligence promises value that goes well beyond simple automation. Objective, data-driven, informed decision-making has always been the lure of AI. While that promise is within reach, businesses should proactively identify and mitigate potential risks, including confirming that their software doesn’t produce biased outcomes for groups of people.
Making AI systems trustworthy has become increasingly urgent. Eighty-six percent of C-suite executives (including 200 CEOs) surveyed for PwC’s AI Predictions 2021 report agree: AI will become a mainstream technology in their companies this year. And it’s no longer restricted to the back office; it’s in every area of the business. A quarter of the executives surveyed already report widespread adoption of processes fully enabled by AI. Another third are rolling out more limited use cases. The top three goals for these initiatives include not just the traditional benefits of automation—efficiency and productivity—but also innovation and revenue growth.
AI is spreading ever deeper into business (and the world at large), influencing high-stakes decisions such as who gets a job, who gets a loan and what kind of medical treatment a patient receives. That makes the potential risk of biased AI even more significant. The path to managing and mitigating this risk begins with understanding how such bias can occur—and how it can be so difficult to detect.
Why AI becomes biased
The definition of AI bias is straightforward: AI that makes decisions that are systematically unfair to certain groups of people. Several studies have identified the potential for these biases to cause real harm.
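To make “systematically unfair” concrete, analysts often compare a model’s outcomes across the groups it affects. The sketch below is illustrative only: the data, group labels and decision values are hypothetical, and it simply computes per-group selection rates and a disparate-impact ratio for a binary decision.

```python
import pandas as pd

# Hypothetical decisions from a model (1 = approved, 0 = denied),
# with a protected group label for each applicant.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Selection rate per group: the share of positive decisions.
rates = df.groupby("group")["approved"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Disparate-impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb flags ratios below 0.8 for closer review.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
```

On real systems this comparison would be run on much larger samples, across every relevant group, and alongside other fairness metrics, since no single number captures unfairness on its own.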
A study published by the US Department of Commerce, for example, found that facial recognition AI misidentifies people of color more often than white people. This finding raises concerns that, if used by law enforcement, facial recognition could increase the risk of the police unjustly apprehending people of color. In fact, wrongful arrests due to a mistaken match by facial recognition software have already occurred.
Another study, this one from Georgia Tech, found that self-driving cars guided by AI performed worse at detecting people with dark skin, which could put the lives of dark-skinned pedestrians at risk.
In financial services, several mortgage algorithms have systematically charged Black and Latino borrowers higher interest rates, according to a UC Berkeley study.
Natural language processing (NLP), the branch of AI that helps computers understand and interpret human language, has been found to demonstrate racial, gender and disability bias. Inherent biases, such as lower sentiment attached to certain races, higher-paying professions associated with men and negative labelling of disabilities, then get propagated into a wide variety of applications, from language translators to resume filtering.
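One place such associations are easy to observe is in pretrained word embeddings, which many NLP applications build on. The sketch below is purely illustrative and does not reproduce any study cited here; it assumes the open-source gensim library and a publicly available GloVe model, and compares how strongly a few profession words associate with gendered pronouns.

```python
import gensim.downloader as api

# Load small pretrained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

# Compare how strongly profession words associate with gendered pronouns.
for profession in ["doctor", "nurse", "engineer", "receptionist"]:
    to_he = vectors.similarity(profession, "he")
    to_she = vectors.similarity(profession, "she")
    print(f"{profession:>13}: he={to_he:.3f}  she={to_she:.3f}")

# A classic analogy probe: 'man' is to 'doctor' as 'woman' is to ...?
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```

The exact numbers depend on the embedding model, but probes like this are a common first step in auditing the language components that feed downstream tools such as resume screeners or translators.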
Researchers from the University of Melbourne, for example, published a report demonstrating how algorithms can amplify human gender biases against women. They created an experimental hiring algorithm that mimicked the gender biases of human recruiters, showing how AI models can encode and propagate at scale any biases that already exist in our world.
Yet another study, by researchers at Stanford, found that automated speech recognition systems demonstrate large racial disparities: voice assistants misidentified 35% of words from Black users but only 19% of words from white users. This makes it difficult for Black users to leverage applications such as virtual assistants, closed captioning, hands-free computing and speech-to-text, applications that others take for granted.
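Disparities like these are typically surfaced by computing word error rate (WER) separately for each group of speakers and comparing the results. The sketch below assumes the open-source jiwer package and uses made-up transcripts, not data from the Stanford study.

```python
from jiwer import wer

# Hypothetical reference transcripts and ASR outputs, tagged by speaker group.
samples = [
    ("group 1", "turn on the kitchen lights", "turn on the kitten lights"),
    ("group 1", "call my brother after work", "call my bother after work"),
    ("group 2", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("group 2", "call my brother after work", "call my brother after work"),
]

# Word error rate per group, pooled over each group's utterances.
for group in ("group 1", "group 2"):
    refs = [ref for g, ref, _ in samples if g == group]
    hyps = [hyp for g, _, hyp in samples if g == group]
    print(f"{group}: WER = {wer(refs, hyps):.2f}")
```

A large, persistent gap between groups on representative test data is the kind of signal that should trigger review of the training data and the model before deployment.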
Read the full article here.