Tackling gender bias with AI: Strategies for a more inclusive future

What does it say when 75-100% of AI-generated images for words like “engineer”, “scientist”, “mathematician”, or “IT expert” show men?

We live in an era of increasing recognition of the need for gender equality. And despite the progress that’s been made, the fight for inclusion is an ongoing one. Gender bias and stereotypes can seep into technology through the data and approaches used to build it. As a result, these technologies may not only reflect existing biases, but could even amplify them.

Left unchecked, these biases can have far-reaching consequences, shaping societal perceptions and perpetuating harmful stereotypes. In the age of rapid AI advancements, it is more critical than ever that we scrutinize the role generative AI plays in perpetuating our stereotypes. We must explore strategies to mitigate these biases and even harness AI to drive positive social change.

How AI tends to amplify existing biases and stereotypes

Technology is a reflection of our society. Because AI is trained on existing data, it’s easy for bias to permeate AI-generated content and results. Numerous studies in the last few years have shown that generative AI amplifies existing biases in its output. A study by Leipzig University and AI startup Hugging Face used three popular AI image-generation models to create 96,000 images of people from different prompts, and found that for terms like “CEO” or “director”, 97% of the generated images showed white men.

“My concern is that AI algorithms may reinforce gender stereotypes by generating images that conform to social norms and expectations,” says Katharina Schweighart, Cognizant Netcentric’s Head of Delivery. “For example, if the algorithm is biased towards associating certain attributes or occupations with specific genders, it may generate images that reflect these stereotypes, and therefore we will further establish certain gender biases in society.”

“As someone with a doctorate in AI, I've observed instances where GenAI has perpetuated stereotypes about women,” says Pablo Almajano. “Ensuring fair and accurate representation demands everyone, developers and users alike, to remain vigilant and proactively identify and address these occurrences with care and consideration.”

This has become such a heated topic that, in recent weeks, Google tried to correct the behavior of its own Gemini AI tool, and the attempt backfired. In an effort to push more diversity into its generated images, Gemini began producing images of women and people of color for prompts like “American founding fathers” or “WW2 soldiers”. The inaccuracy and the erasure of real historical discrimination caused even more backlash, and it is clearly not the answer to promoting diversity in generative AI.

Steps we can take to mitigate bias in the use of AI

With the global market value of AI-powered marketing expected to grow to over $100 billion in the next three years, the question of how to design ethical and inclusive AI is critically important. As the Gemini example shows, diversity and gender equality cannot be shoehorned into an algorithm.

So what can we do to help ensure that AI is not exacerbating negative biases, but driving us towards the equitable future we want to create? Each of us has a role to play:

  1. Invest in building diverse, representative sets of data to train AI. The quality and accuracy of AI output are directly linked to the data it’s trained on. Making sure that data sets include proper representation across gender, race, culture, and other demographics will help us address the gaps.
  2. Commit to transparency and frequent auditing of AI models for bias. AI models need to be transparent and explainable so that everyone using them can understand why they produce the results they do. Teams using AI should maintain a critical lens on this new and rapidly developing technology; even a lightweight, recurring audit of model output can surface skewed representation early (see the sketch after this list).
  3. Use more sophisticated prompts to guide AI models. Being mindful about using inclusive language in prompts, such as “all genders” rather than “men” or “women”, helps steer AI models over time toward more inclusive, neutral results.
  4. Champion gender equality in education and technology design. This last point is the most important one, underpinning everything that we are doing to advance our technology and societies. Encouraging women to pursue STEM and making those environments inclusive for them is the most authentic way to foster diversity from the very design of new AI technologies.
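
To make the second point more concrete, here is a minimal sketch of what a recurring bias audit could look like. The helpers `generate_images` and `classify_presented_gender` are hypothetical placeholders for whichever image model and annotation process (human or automated) a team actually uses; this is an illustration of the idea, not a specific product’s API.

```python
from collections import Counter

# Prompts to audit; occupational terms are where skew tends to show up.
PROMPTS = ["a photo of a CEO", "a photo of an engineer", "a photo of a nurse"]
SAMPLES_PER_PROMPT = 100


def audit_gender_distribution(generate_images, classify_presented_gender):
    """Estimate how often each perceived gender appears per prompt.

    generate_images(prompt, n) -> iterable of images (hypothetical helper)
    classify_presented_gender(image) -> label such as "man" or "woman" (hypothetical helper)
    """
    report = {}
    for prompt in PROMPTS:
        counts = Counter()
        for image in generate_images(prompt, n=SAMPLES_PER_PROMPT):
            counts[classify_presented_gender(image)] += 1
        # Convert raw counts to proportions so runs can be compared over time.
        report[prompt] = {
            label: count / SAMPLES_PER_PROMPT for label, count in counts.items()
        }
    return report

# A run might return something like:
# {"a photo of a CEO": {"man": 0.97, "woman": 0.03}, ...}
# which a team can track release over release and compare against a target distribution.
```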

Championing diversity and inclusion at every level

Working at the forefront of new digital strategies and technology, Cognizant Netcentric is deeply committed to fighting bias in our work and the societies we impact. We believe in harnessing the power and reach of GenAI to inspire inclusion and promote equality in the world.

As Katrin Weissenborn, Senior Marketing Manager and Diversity & Inclusion lead at Cognizant Netcentric, says: “Exploring the depths of GenAI raises critical questions about biases, especially regarding the portrayal of women. As we explore this new environment, it's crucial to strive to recognize, and rectify, any distortions in representation. We must build a brighter future, rather than regress into the past.”

Learn more about what we do as a business, and how we champion diversity & inclusion in the workplace.