4 shocking AI bias examples
Artificial intelligence (AI) can transform our lives for the better. But AI systems are only as good as the data fed into them. So, what if that data has its own biases?
Time and again, we’ve seen AI not only reflect biases from the data it’s built upon – but automate and magnify them. “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” explains Kristian Lum, Lead Statistician at the Human Rights Data Analysis Group.
To illustrate this point, here are four shocking examples of AI bias – including what the AI was meant to do, how it ended up reflecting society’s worst prejudices, and why it happened…
Employment is one of the most common areas for bias to manifest in modern life. Despite progress over the past couple of decades, women are still underrepresented in roles relating to STEM (science, technology, engineering and mathematics). According to Deloitte, for example, women accounted for less than a quarter of technical roles in 2020.
That wasn’t helped by Amazon’s automated recruitment system, which was intended to rate applicants on their suitability for various roles. The system learned to judge candidates by studying resumes submitted to the company over the previous decade – most of which came from men. Sadly, it became biased against women in the process.

Because women had historically been underrepresented in technical roles, the system inferred that male applicants were preferred, and penalized resumes that indicated a candidate was a woman. Despite attempts to correct it, it was no surprise that Amazon eventually scrapped the initiative in 2017.
It’s not just gender bias that can be reflected by artificial intelligence. There are several AI bias examples relating to race too.
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system predicted the likelihood that criminal defendants in the US would re-offend. In 2016, ProPublica investigated COMPAS and found that the system was far more likely to say black defendants were at risk of reoffending than their white counterparts.
While it correctly predicted reoffending at a rate of around 60% for both black and white defendants, ProPublica found that COMPAS:

- Wrongly flagged black defendants who did not go on to re-offend as high risk at almost twice the rate of white defendants (around 45% versus 23%)
- Mislabeled white defendants who did go on to re-offend as low risk more often than black defendants (around 48% versus 28%)
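This is why overall accuracy alone can hide bias: two groups can be scored with similar accuracy while the *kinds* of errors differ sharply between them. The toy sketch below (hypothetical records, not COMPAS data) shows how a per-group false positive rate – the share of people who did not re-offend but were still labeled high risk – can diverge even when each group contains the same mix of outcomes:

```python
# Hypothetical illustration (toy data, not COMPAS): groups with the same
# outcomes can still receive very different error rates from a model.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["labeled_high_risk"]]
    return len(flagged) / len(non_reoffenders)

group_a = [
    {"labeled_high_risk": True,  "reoffended": False},  # wrongly flagged
    {"labeled_high_risk": True,  "reoffended": True},
    {"labeled_high_risk": False, "reoffended": False},
    {"labeled_high_risk": False, "reoffended": False},
]
group_b = [
    {"labeled_high_risk": False, "reoffended": False},
    {"labeled_high_risk": True,  "reoffended": True},
    {"labeled_high_risk": False, "reoffended": False},
    {"labeled_high_risk": False, "reoffended": False},
]

print(false_positive_rate(group_a))  # ~0.33: 1 of 3 non-reoffenders flagged
print(false_positive_rate(group_b))  # 0.0: no non-reoffenders flagged
```

Auditing error rates per group, rather than accuracy overall, is exactly the kind of check that surfaced the disparity ProPublica reported.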
AI can also reflect racial prejudices in healthcare, which was the case for an algorithm used by US hospitals. Used for over 200 million people, the algorithm was designed to predict which patients needed extra medical care. It analyzed their healthcare cost history – assuming that cost indicates a person’s healthcare needs.
However, that assumption didn’t account for the different ways in which black and white patients pay for healthcare. A 2019 paper in Science explains how black patients are more likely to pay for active interventions like emergency hospital visits – despite showing signs of uncontrolled illnesses.
As a result, black patients:

- Were assigned lower risk scores than white patients with the same level of health need
- Had to be considerably sicker than white patients before being flagged for extra care
- Missed out on additional help as a result – the paper estimated that removing the bias would raise the share of black patients flagged for extra care from around 18% to 47%
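The core problem is the proxy. The sketch below (hypothetical toy numbers, not the actual algorithm) shows how ranking patients by past spending instead of by medical need quietly deprioritizes anyone who spends less for the same level of illness:

```python
# Hypothetical sketch: past cost as a proxy for medical need.
# A patient who spends less despite being sicker loses out under the proxy.

patients = [
    {"severity": 8, "past_cost": 4000},  # sicker, but spends less
    {"severity": 5, "past_cost": 9000},  # less sick, but spends more
]

# Proxy-based choice: the highest spender gets flagged for extra care
flagged_by_cost = max(patients, key=lambda p: p["past_cost"])

# Need-based choice: the sickest patient gets flagged
flagged_by_need = max(patients, key=lambda p: p["severity"])

print(flagged_by_cost["severity"])  # 5 -> the less sick patient is flagged
print(flagged_by_need["severity"])  # 8 -> the sicker patient should be
```

The algorithm never needs to see race for this to go wrong: any systematic gap between spending and need is enough to skew who gets flagged.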
While Twitter has made recent headlines due to Elon Musk’s acquisition, Microsoft’s attempt to showcase a chatbot on the platform was even more controversial.
In 2016, Microsoft launched Tay, a chatbot intended to learn from casual, playful conversations with other Twitter users.
Initially, Microsoft noted how “relevant public data” would be “modeled, cleaned and filtered”. However, within 24 hours, the chatbot was sharing tweets that were racist, transphobic and antisemitic. It learned discriminatory behavior from its interactions with users, many of whom were feeding it inflammatory messages.
When it comes to bias in AI, these examples all have one thing in common – data. AI learns bias from the data it’s trained on, which means researchers have to be careful about how they gather and treat that data.
That’s where Prolific can help. We can support AI researchers by placing emphasis on solid data collection and best practices to avoid bias. As well as connecting researchers to high-quality data, our expert team can advise on how to avoid bias with a system of quality checks and insightful metrics.
The result? Your data is as clean and fair as it can be.