
4 shocking AI bias examples

George Denison
October 24, 2023

Artificial intelligence (AI) can transform our lives for the better. But AI systems are only as good as the data fed into them. So, what if that data has its own biases?

Time and again, we’ve seen AI not only reflect biases from the data it’s built upon – but automate and magnify them. “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” explains Kristian Lum, Lead Statistician at the Human Rights Data Analysis Group.

To illustrate this point, here are four shocking examples of AI bias – including what the AI was meant to do, how it ended up reflecting society’s worst prejudices, and why it happened… 

1. Amazon’s algorithm discriminated against women

Employment is one of the most common areas for bias to manifest in modern life. Despite progress over the past couple of decades, women are still underrepresented in roles relating to STEM (science, technology, engineering and mathematics). According to Deloitte, for example, women accounted for less than a quarter of technical roles in 2020.

That wasn’t helped by Amazon’s automated recruitment system, which was intended to evaluate applicants based on their suitability for various roles. The system learned how to judge if someone was suitable for a role by looking at resumes from previous candidates. Sadly, it became biased against women in the process.

Because women had previously been underrepresented in technical roles, the system taught itself that male candidates were preferable. Consequently, it gave resumes from female applicants lower ratings. Despite attempts to correct the system, Amazon eventually ditched the initiative in 2017.
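To see how this kind of bias creeps in, here's a deliberately simplified sketch in Python – synthetic data and a naive keyword scorer, not Amazon's actual system. Because the past hires skew male, any token that correlates with female applicants (such as the name of a women's club) ends up with a negative weight:

# Toy illustration (not Amazon's actual system): a scorer trained on
# historically skewed hiring data learns to penalize tokens that
# correlate with female applicants. All data below is synthetic.
from collections import Counter
import math

# Synthetic "historical" resumes: (tokens, hired?). Past hires skew male,
# so tokens like "womens_chess_club" mostly appear on rejected resumes.
history = [
    (["python", "java", "mens_rugby"], True),
    (["python", "sql", "robotics_club"], True),
    (["java", "c++", "chess_club"], True),
    (["python", "sql", "womens_chess_club"], False),
    (["java", "statistics", "womens_coding_society"], False),
    (["python", "c++", "robotics_club"], True),
]

hired_tokens = Counter()
rejected_tokens = Counter()
for tokens, hired in history:
    (hired_tokens if hired else rejected_tokens).update(tokens)

def token_weight(token):
    """Smoothed log-odds of a token appearing on a hired resume."""
    return math.log((hired_tokens[token] + 1) / (rejected_tokens[token] + 1))

def score(resume_tokens):
    return sum(token_weight(t) for t in resume_tokens)

# Two equally qualified candidates; only one gendered token differs.
print(score(["python", "sql", "chess_club"]))         # higher score
print(score(["python", "sql", "womens_chess_club"]))  # penalized

The model never sees gender directly – it simply learns that resumes resembling past (mostly male) hires score well, which is the trap Amazon's system reportedly fell into.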

2. COMPAS race bias with reoffending rates

It’s not just gender bias that can be reflected by artificial intelligence. There are several AI bias examples relating to race too. 

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm predicted how likely US defendants were to reoffend. In 2016, ProPublica investigated COMPAS and found that the system was far more likely to flag black defendants as at risk of reoffending than their white counterparts.

While it correctly predicted reoffending at a rate of around 60% for both black and white defendants, COMPAS:

  • Falsely flagged almost twice as many black defendants as higher risk compared to white defendants – 45% versus 23%
  • Mistakenly labeled more white defendants as low risk who then went on to reoffend – 48% of white defendants compared to 28% of black defendants (see the sketch after this list for how these error rates are defined)
  • Was 77% more likely to classify black defendants as higher risk than white defendants, even when other variables (such as prior crimes, age, and gender) were controlled for
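The disparity above is easier to see as simple error-rate arithmetic. Here's a minimal Python sketch – the counts are constructed to mirror the percentages reported above, not taken from ProPublica's dataset – showing how a system can be roughly equally accurate for two groups while making very different kinds of mistakes about them:

# Sketch of the per-group error-rate comparison behind the COMPAS findings.
# The counts below are invented to mirror the reported percentages; they
# are NOT ProPublica's data, only a demonstration of how the rates are defined.

def error_rates(records):
    """records: list of (predicted_high_risk, actually_reoffended) pairs."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    did_not_reoffend = sum(1 for _, actual in records if not actual)
    reoffended = sum(1 for _, actual in records if actual)
    return {
        "false_positive_rate": fp / did_not_reoffend,  # flagged high risk, didn't reoffend
        "false_negative_rate": fn / reoffended,        # labeled low risk, did reoffend
    }

# Two hypothetical groups with similar overall accuracy (roughly 60-65%)
# but opposite error patterns.
group_a = ([(True, False)] * 45 + [(False, False)] * 55   # didn't reoffend
           + [(True, True)] * 72 + [(False, True)] * 28)  # reoffended
group_b = ([(True, False)] * 23 + [(False, False)] * 77
           + [(True, True)] * 52 + [(False, True)] * 48)

print(error_rates(group_a))  # false positive rate 0.45, false negative rate 0.28
print(error_rates(group_b))  # false positive rate 0.23, false negative rate 0.48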

3. US healthcare algorithm underestimated black patients’ needs

AI can also reflect racial prejudice in healthcare, as was the case for an algorithm used by US hospitals. Applied to over 200 million people, the algorithm was designed to predict which patients needed extra medical care. It did this by analyzing their healthcare cost history – assuming that cost is a reliable indicator of a person's healthcare needs.

However, that assumption didn't account for the different ways in which black and white patients pay for healthcare. A 2019 paper in Science explains that black patients tend to generate lower healthcare costs than white patients with the same level of need – they are more likely to pay only for active interventions, such as emergency hospital visits, despite showing signs of uncontrolled illness.

As a result, black patients:

  • Received lower risk scores than their white counterparts
  • Were judged to be on a par with healthier white patients, because their costs were similar
  • Qualified for extra care less often than white patients with the same level of need
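Here's a toy sketch of how a cost-based proxy produces exactly this pattern. The patients, costs and threshold below are invented for illustration – this is not the hospital algorithm itself, just the proxy logic it relied on:

# Toy sketch (synthetic numbers, not the actual hospital algorithm):
# if "predicted cost" stands in for "healthcare need", a group that spends
# less at the same level of sickness gets lower risk scores and misses
# the extra-care threshold.

# Each patient: (group, number_of_chronic_conditions, annual_cost_usd).
# Assumption for illustration: at the same sickness level, group B's
# recorded costs are lower (e.g. because of unequal access to care).
patients = [
    ("A", 4, 12_000), ("A", 4, 11_500), ("A", 1, 2_000),
    ("B", 4, 7_000),  ("B", 4, 6_500),  ("B", 1, 1_500),
]

EXTRA_CARE_THRESHOLD = 10  # patients scoring above this get the care program

def cost_proxy_score(annual_cost):
    """A 'risk score' that is really just last year's spending, rescaled."""
    return annual_cost / 1_000

for group, conditions, cost in patients:
    score = cost_proxy_score(cost)
    flagged = score > EXTRA_CARE_THRESHOLD
    print(f"group={group} conditions={conditions} score={score:5.1f} extra_care={flagged}")

# Equally sick patients (4 chronic conditions) in group B fall below the
# threshold because the proxy measures spending, not sickness.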

4. Chatbot Tay shared discriminatory tweets

While Twitter has made recent headlines due to Elon Musk’s acquisition, Microsoft’s attempt to showcase a chatbot on the platform was even more controversial. 

In 2016, Microsoft launched Tay, a chatbot intended to learn from casual, playful conversations with other users on the platform.

Initially, Microsoft noted how “relevant public data” would be “modeled, cleaned and filtered”. However, within 24 hours, the chatbot was sharing tweets that were racist, transphobic and antisemitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it inflammatory messages.

How to avoid bias in AI

When it comes to bias in AI, these examples all have one thing in common – data. AI learns bias from the data it's trained on, which means researchers have to be extremely careful about how they gather, clean, and treat that data.
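As a concrete starting point, one of the simplest checks a researcher can run before training is to compare how each group is represented in the data and how often each group carries the positive label. This is only a sketch of one such check, not a full fairness audit, and the field names and figures are hypothetical:

# A basic pre-training data check (a sketch, not a complete fairness audit):
# compare each group's share of the data and its positive-label rate.
# Large gaps are a prompt to investigate before the model bakes them in.
from collections import defaultdict

def label_rates_by_group(rows, group_key="group", label_key="label"):
    counts = defaultdict(lambda: {"n": 0, "positive": 0})
    for row in rows:
        g = row[group_key]
        counts[g]["n"] += 1
        counts[g]["positive"] += int(row[label_key])
    return {
        g: {
            "share_of_data": c["n"] / len(rows),
            "positive_rate": c["positive"] / c["n"],
        }
        for g, c in counts.items()
    }

# Hypothetical training set: group "B" is both underrepresented
# and rarely carries the positive label.
training_data = (
    [{"group": "A", "label": 1}] * 60 + [{"group": "A", "label": 0}] * 30
    + [{"group": "B", "label": 1}] * 2 + [{"group": "B", "label": 0}] * 8
)
print(label_rates_by_group(training_data))
# group "A": 90% of the data, positive rate ~0.67; group "B": 10% of the data, positive rate 0.2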

Learn how to avoid bias with ethical data collection in The Quick Guide to AI Ethics for Researchers. It features 6 key ethical challenges that every AI researcher must be aware of, and 4 vital tips that will help you train AI ethically and responsibly. Download your copy now.