AI Isn’t Neutral: Gender Bias in the Machine
- Alice Tooley
- Aug 8
- 5 min read

Is AI working against women? A 2024 UNESCO study suggests that AI is not neutral. As it increasingly influences areas like education, healthcare and employment, it has started to reflect existing inequalities – particularly gender bias. And the worst part is, this isn’t a glitch – it’s baked into the design.
As UNESCO’s report points out, there are “significant risks for women due to bias in, and misuse of, AI systems” (p. 5).
In other words, the data and algorithms used to train AI systems are contaminated by bias. And sadly, when these tools are not subject to rigorous review, they don’t challenge inequality – they reinforce it.
The Hiring Game: Women At A Disadvantage
“Recruitment algorithms…have shown a tendency to favor male candidates” (p. 5).
Imagine uploading your CV and never hearing back. You’re qualified? Sure. But the algorithm liked the guy next to you more. And the worst part? This isn’t a futuristic scenario – it’s already happening.
This is because many AI models are trained using historical employment data, which often reflects decades of male-dominated patterns. In other words, if men have been favoured in a particular sector in the past, the algorithm will most likely learn to do the same – even if a woman is more suited to the role.
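To make that mechanism concrete, here is a toy sketch in Python. All of the data below is invented for illustration, and the "model" is deliberately naive – it does nothing more than mirror the historical hire rate – but that is exactly the point: a system trained only on skewed records reproduces the skew.

```python
# Hypothetical illustration: a naive model trained on skewed historical
# hiring data simply reproduces the skew. All records here are invented.

historical_hires = [
    # (gender, qualified, hired) -- synthetic records in which equally
    # qualified women were hired less often than men
    ("M", True, True), ("M", True, True), ("M", True, True),
    ("M", True, False),
    ("F", True, True),
    ("F", True, False), ("F", True, False), ("F", True, False),
]

def hire_rate(records, gender):
    """Fraction of qualified candidates of this gender who were hired."""
    relevant = [r for r in records if r[0] == gender and r[1]]
    return sum(r[2] for r in relevant) / len(relevant)

def naive_score(candidate_gender):
    """A 'model' that just mirrors the historical hire rate."""
    return hire_rate(historical_hires, candidate_gender)

print(naive_score("M"))  # 0.75
print(naive_score("F"))  # 0.25
```

Equally qualified candidates get different scores purely because of the pattern in the past data – no one wrote "prefer men" anywhere in the code. Real recruitment systems are far more complex, but the underlying failure mode is the same.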
“Men are 5x more likely than women to be offered online ads for high-paying executive jobs” (p. 25).
Not only might women be less likely to be selected for their dream roles, they may not even see the job listings in the first place. As a result, many women are being denied opportunities before the recruitment process even begins.
And if you’re not even given the chance to apply, what happens when the system starts making bigger decisions?
The Job Risks
But are you at least safe if you already have a job? According to UNESCO’s report, you aren’t.
It points out that women dominate administrative positions, and these are among the most vulnerable to automation. As AI systems become more capable of handling repetitive tasks, these jobs grow increasingly at risk.
“In the absence of reskilling opportunities, this could result in elevated rates of job displacement among women” (p. 29).
And AI-driven job displacement compounds other inequalities. Although AI companies are growing, women face significant barriers to entering these new fields. According to UNESCO’s study, the underrepresentation of women in AI-related fields is itself a key driver of the technology’s gender bias.
How can AI ever be unbiased if women are still sidelined from shaping it?
Policing, Surveillance, and Safety Concerns
This bias becomes especially dangerous in areas like policing and surveillance, where overreliance on flawed systems can have serious consequences.
In some parts of the world, AI has become an integral part of policing, used to predict patterns of domestic violence and to warn women about places they should avoid for their safety. Used carefully, it can be helpful.
However, this usefulness is limited. For one thing, the tech isn’t always accurate, which can sometimes put people at greater risk than before. And when these tools are made easily accessible, they can be misused – for example, by an abuser trying to track down someone who is actively trying to stay out of harm’s way.
UNESCO also raises concerns that “facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies” (p. 5).
This isn’t just about technical failure – it has real consequences. Women could be wrongfully flagged or arrested – and feel like public spaces aren’t really theirs to move through freely.
AI’s Health Blindspot
You’d hope that AI in healthcare – where lives are literally on the line – would be held to the highest standards. But, unfortunately, gender bias pops up here too.
UNESCO has found that "the data needed for women's health in the 21st century is missing" (p. 26).
This makes it harder to accurately diagnose conditions that disproportionately affect women. Most notably, as UNESCO points out, girls and women with “disabilities” or from “racialized or minoritized groups” are the most vulnerable (p. 27).
This creates a bias that fails basic fairness – it sets women up to slip through the net, and that could have serious, even dangerous, consequences.
AI Bias In Action


Here is an example from our own brand. We tried to generate an AI image of women in football shorts. After multiple attempts, it became clear that – to the AI – football was a man’s game. The women were consistently depicted in hypersexualised ways that had nothing to do with the sport.
Instead of strong, athletic players, we got suggestive poses, unrealistic bodies, and outfits better suited to a music video than a match.
"It felt really uncomfortable and inappropriate - especially since I was specifically using the term 'girls' rather than 'women'" noted one of our team members.
It was a clear sign that the AI had learned its cues from a biased internet – one where women in sport are overlooked and objectified.
So, Is AI the Problem? Or Are We?
It’s easy to blame the technology itself, but AI doesn’t just appear out of nowhere. People build it. And, as UNESCO’s study has highlighted, when the people behind the tech fail to account for bias, the tools they create will reflect those blind spots.
But the solution isn’t to stop using and creating AI – it’s to build it better. UNESCO’s report suggests several ways to do this (pp. 48-52), including:
- Thinking about gender at every stage of AI development
- Expanding support programs that help young women and girls get involved
- Making sure AI-driven risks are addressed and corrected
- Building databases that are easy for everyone to access and use
Ultimately, the key to better AI isn’t just new technology – it’s about holding the people behind it accountable with clear rules and strong guidelines to keep unfair gender bias out of the algorithms.
As UNESCO’s report states – “It is of the utmost importance that the business community undertakes a comprehensive assessment of its human rights responsibilities from a gender perspective with respect to the potential risks posed by AI” (p. 36).
References
UNESCO. (2024). UNESCO Women for Ethical AI: Outlook study on artificial intelligence and gender. https://unesdoc.unesco.org/ark:/48223/pf0000391719.locale=en