Thursday, July 3, 2025
CEO North America
Understanding algorithmic bias and building trust in AI

in Innovation

Artificial intelligence promises to create a better and more equitable world. Left unchecked, however, it could also perpetuate historical inequities. Fortunately, businesses can take measures to mitigate this risk so they can use AI systems—and decision-making software in general—with confidence. PwC explains.

Artificial intelligence promises value well beyond simple automation: objective, data-driven, informed decision-making has always been its lure. While that promise is within reach, businesses should proactively identify and mitigate potential risks, including verifying that their software does not produce biased outcomes against groups of people.

Enabling our AI systems to be trustworthy has become increasingly urgent. Eighty-six percent of C-suite executives (including 200 CEOs) surveyed for PwC’s AI Predictions 2021 report agree: AI will become a mainstream technology in their companies this year. And it’s no longer restricted to the back office. It’s in every area of the business. A quarter of the executives surveyed already report widespread adoption of processes fully enabled by AI. Another third are rolling out more limited use cases. The top three goals for these initiatives include not just the traditional benefits of automation—efficiency and productivity—but also innovation and revenue growth.

AI is spreading ever deeper into business (and the world at large), influencing life-critical decisions such as who gets a job, who gets a loan and what kind of medical treatment a patient receives. That makes the potential risk of biased AI even more significant. The path to managing and mitigating this risk begins with understanding how such bias can occur—and how it can be so difficult to detect.

Why AI becomes biased

The definition of AI bias is straightforward: AI that makes decisions that are systematically unfair to certain groups of people. Several studies have identified the potential for these biases to cause real harm.
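To make "systematically unfair" concrete, here is a minimal sketch of the demographic parity difference, one common way to quantify such a gap. The loan-approval data below is invented for illustration and is not drawn from any of the studies cited:

```python
def selection_rate(decisions):
    """Fraction of positive (favorable) decisions, e.g. loan approvals."""
    return sum(decisions) / len(decisions)

# Hypothetical approval decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% approved

# A gap near zero suggests parity; a large gap flags systematic disparity.
parity_gap = selection_rate(group_a) - selection_rate(group_b)
print(f"Demographic parity difference: {parity_gap:.3f}")  # 0.375
```

A single metric like this cannot prove or rule out bias on its own, but tracking it across groups is a common first step in auditing a decision-making system.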

A study published by the US Department of Commerce, for example, found that facial recognition AI misidentifies people of color more often than white people. This finding raises concerns that, if used by law enforcement, facial recognition could increase the risk of the police unjustly apprehending people of color. In fact, wrongful arrests due to a mistaken match by facial recognition software have already occurred.

Another study, this one from Georgia Tech, found that self-driving cars guided by AI performed worse at detecting people with dark skin, which could put the lives of dark-skinned pedestrians at risk.

In financial services, several mortgage algorithms have systematically charged Black and Latino borrowers higher interest rates, according to a UC Berkeley study.

Natural language processing (NLP), the branch of AI that helps computers understand and interpret human language, has been found to demonstrate racial, gender and disability bias. Inherent biases, such as low sentiment attached to certain races, higher-paying professions associated with men and negative labeling of disabilities, then propagate into a wide variety of applications, from language translators to resume filtering.
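As a toy illustration of how such associations surface, the sketch below compares word vectors by cosine similarity. The three-dimensional vectors are invented and skewed by construction, not real embeddings; audits such as the Word Embedding Association Test apply the same idea to actual embedding models:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up vectors, deliberately skewed to mimic a biased embedding space.
vectors = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.1],
    "nurse":    [0.2, 0.8, 0.1],
}

# Positive score: the word sits closer to "he"; negative: closer to "she".
for word in ("engineer", "nurse"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(word, round(bias, 3))
```

In a real embedding trained on web text, skews like these emerge from the training data rather than being written in by hand, which is exactly how historical biases get encoded.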

Researchers from the University of Melbourne, for example, published a report demonstrating how algorithms can amplify human gender biases against women. Researchers created an experimental hiring algorithm that mimicked the gender biases of human recruiters, showing how AI models can encode and propagate at scale any biases already existing in our world.

Yet another study, by researchers at Stanford, found that automated speech recognition systems demonstrate large racial disparities: voice assistants misidentified 35% of words from Black users but only 19% of words from white users. This makes it difficult for Black users to leverage applications that others take for granted, such as virtual assistants, closed captioning, hands-free computing and speech-to-text.
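Figures like "35% of words misidentified" are typically reported as word error rate (WER): the edit distance between the reference transcript and the system's output, in words, normalized by the reference length. A self-contained sketch, with invented transcripts:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Two word errors ("the" -> "a", "lights" -> "light") out of five words.
print(wer("turn on the kitchen lights", "turn on a kitchen light"))  # 0.4
```

Comparing average WER across demographic groups, as the Stanford study did, is how disparities like 35% versus 19% are measured.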

Read the full article here.

Tags: AI, Algorithmic bias

CEO North America © 2024