From bias to balance: breaking gender stereotypes in AI
This image was created by an AI image generation tool, DALL-E
With recent advancements in artificial intelligence tools and solutions available to all, it’s more important than ever to understand the limitations as well as the benefits of AI. One of the most pressing issues that needs action is bias.
Can AI inherit gender bias from humans? This 2016 study showed that blindly applying machine learning can amplify the biases present in its training data. Bias runs deeper than many of us think. So, sadly, the answer appears to be yes.
Now, the question is: how can we address these damaging stereotypes and build artificial intelligence that upholds a fairer world?
How can AI have gender bias?
When we refer to artificial intelligence today, it’s not technically ‘intelligent’ or sentient in the way many imagine; rather, AI is very good at producing the responses we’re looking for. It learns to do this via machine learning, which uses algorithms and statistical models to draw conclusions from the data it’s given.
Unfortunately, this means AI is vulnerable to bad influences, too.
Gender bias in AI has been shown to be prevalent; Harvard Business Review cited an example of word embeddings in the natural language processing behind voice assistants like Siri and Alexa associating ‘doctor’ with ‘man’ and ‘nurse’ with ‘woman’. This is not only inaccurate, but an uncomfortable reminder that AI can mirror some of our harmful biases and behavior.
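To make the word-embedding example concrete, here is a minimal sketch of how such an association can be measured. The vectors below are invented purely for illustration; real embeddings (such as word2vec) are learned from large text corpora, which is exactly where the bias creeps in.

```python
import math

# Toy word vectors, invented for illustration only -- real embeddings
# are learned from text and would have hundreds of dimensions.
vectors = {
    "man":    [0.9, 0.1, 0.3],
    "woman":  [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.2, 0.7],
    "nurse":  [0.2, 0.8, 0.7],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means the words are more associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# In a biased embedding space, 'doctor' sits closer to 'man'
# and 'nurse' sits closer to 'woman'.
print(cosine(vectors["doctor"], vectors["man"]))    # higher
print(cosine(vectors["doctor"], vectors["woman"]))  # lower
```

The bias is not in the similarity formula, which is neutral; it is baked into the geometry of the vectors themselves, which the model learned from us.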
Where did AI learn gender bias?
Much like an impressionable child, AI tends to parrot what it picks up from its environment – in other words, the dataset it’s given. If not enough women appear in a particular set of data, the AI isn’t able to tell that the data is flawed or not representative of the truth.
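A small sketch shows why an underrepresentative dataset fools a model. The figures below are hypothetical: with 90 male engineers and 10 female engineers in the data, a naive model that simply learns the most common pattern looks accurate on paper while being wrong about every woman in the set.

```python
from collections import Counter

# Hypothetical, skewed training data: 90 male engineers, 10 female engineers.
training = [("male", "engineer")] * 90 + [("female", "engineer")] * 10

# A naive model that memorizes the most common gender for a role
# concludes "engineers are men" -- the data never told it otherwise.
counts = Counter(gender for gender, role in training if role == "engineer")
predicted_gender = counts.most_common(1)[0][0]

# The majority guess scores 90% accuracy on this skewed data,
# yet it misclassifies every woman in it.
accuracy = sum(1 for gender, _ in training if gender == predicted_gender) / len(training)
print(predicted_gender, accuracy)  # male 0.9
```

Nothing in the accuracy number warns the model that its worldview is skewed; only auditing the data itself reveals the gap.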
AI can also learn gender bias in other ways:
- Development: Without enough women in tech, it’s easy for gaps to be overlooked during development, and for products to be built that don’t reflect the full population they’re meant to serve.
- User input: AI doesn’t make decisions based on what society aspires to be, only what it currently is. If enough users input biased information, AI will change to reflect people’s current views, however flawed or incorrect they may be.
At present, AI can only view the world through a narrow lens, and it is always learning. That’s why it’s so important that we are conscious and intentional about what we teach it.
Does it matter if AI is gender biased?
Yes. The real-world consequences of gender bias in AI go far beyond assuming an engineer is a man and a receptionist is a woman.
A 2022 study by University College London revealed that AI models built to predict liver disease from blood tests are twice as likely to return false negatives – that is, to miss the disease – in women as in men. Imagine putting that AI to use in a hospital: many women would go undiagnosed and never receive the treatment they need, leaving them in danger.
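Disparities like this surface only when a model is evaluated per group rather than in aggregate. The sketch below computes the false-negative rate separately for women and men; the records are invented to illustrate the kind of gap the study describes, not taken from its data.

```python
# Hypothetical evaluation records: (sex, actually_has_disease, model_said_positive).
# Figures are invented for illustration, not drawn from the UCL study.
records = [
    ("female", True, False), ("female", True, False),
    ("female", True, True),  ("female", True, True),
    ("male", True, False),   ("male", True, True),
    ("male", True, True),    ("male", True, True),
]

def false_negative_rate(records, group):
    """Share of true cases in `group` that the model missed."""
    positives = [r for r in records if r[0] == group and r[1]]
    misses = [r for r in positives if not r[2]]
    return len(misses) / len(positives)

print(false_negative_rate(records, "female"))  # 0.5
print(false_negative_rate(records, "male"))    # 0.25
```

A single overall accuracy score would average these two numbers away, which is why per-group metrics belong in any evaluation of a medical model.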
In another high-profile case, a computer vision system for gender recognition showed higher error rates for women overall, and especially for women with darker skin tones.
Steps everyone can take to stop bias in AI
The answer to this million-dollar question is simple, but requires active change on many levels. Put simply, we need more diversity in tech, and we need to make sure our data reflects the path we choose as a society, not our past mistakes.
Update and vet data regularly for gaps. AI learns based on what it’s given, so we need to give it clean, untainted data. Old data should be culled or gated in such a way that the AI understands the information is outdated.
Create standards and frameworks to eliminate gender bias. Change needs to happen from the ground up; with proper frameworks and standards in place to check for and eliminate gender bias, developers can build far more neutral algorithms.
Champion gender equality in education. Not only should women be encouraged to pursue careers in STEM, but everyone involved in the development of AI and technology should be educated on how to look for blind spots where data may be missing or inaccurate.
Spread awareness and keep asking questions. It’s important to ask questions and not take AI at face value, so we can continue to check the biases we may be exposing ourselves to.
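The first step above – vetting data for gaps – can start with something as simple as a representation check. This is a minimal sketch with a hypothetical dataset and an arbitrarily chosen threshold; in practice the rows would come from your own data pipeline and the threshold from your own fairness criteria.

```python
from collections import Counter

# Hypothetical dataset rows; in a real pipeline these would be loaded
# from your training data.
rows = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30

def representation_gaps(rows, field, threshold=0.4):
    """Return groups whose share of the data falls below `threshold`."""
    counts = Counter(row[field] for row in rows)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items() if c / total < threshold}

# Flags any underrepresented group before the data reaches a model.
print(representation_gaps(rows, "gender"))  # {'female': 0.3}
```

Running a check like this every time a dataset is updated turns “vet data regularly” from a slogan into a routine, automatable step.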
By taking these steps, organizations can help ensure that their AI models aren’t reinforcing harmful stereotypes, and instead building towards a fairer, more just world.
Cultivating and supporting diversity wherever possible
Cognizant Netcentric is committed to tackling bias in the workplace as well as the world at large. “We pride ourselves on cultivating a diverse team where individuals are equally valued and respected, regardless of their background or identity.” - Katrin Weissenborn, Diversity & Inclusion Circle Lead.
Interested in learning more about us? Head to our Diversity & Inclusion page to find out more!