The Algorithm Is Not Neutral
What Generative AI Gets Wrong About Women, Work, and Leadership
Hi, I’m Bonnie Marcus. Welcome to Own Your Ambition, a weekly newsletter where I offer my best advice on how to successfully navigate the workplace as a woman today. A former CEO who rose from an entry-level position to the C-suite, an executive coach, and a published author, I share my experiences and lessons learned from my corporate tenure, focusing on giving women proven tools and strategies.
I also write about gender issues in the workplace and beyond. This newsletter, based on my interview with Dr. Ann Olivarius and a recent Stanford University study, focuses on how generative AI affects women of all ages in the workplace today.
Generative AI is quickly becoming embedded in how companies recruit, evaluate, and promote talent. Employers are turning to these tools hoping technology will finally solve one of the most stubborn problems in the workplace: how to make fair, unbiased decisions about people.
But according to Dr. Ann Olivarius—an internationally recognized expert on discrimination, institutional accountability, and gender equity—generative AI isn’t solving bias. It’s scaling it.
Recent research from Stanford University confirms what many women already suspect: generative AI systems don’t simply reflect workplace inequality. They reproduce and reinforce it, particularly for women—and especially as women age.
As Dr. Olivarius put it bluntly in our conversation:
“Generative AI hurts women at work at every age, because it distorts reality according to existing stereotypes.”
The Stanford Study: Bias at Every Stage of a Woman’s Career
The Stanford study found that when generative AI is asked to produce images or descriptions of professionals, it consistently relies on outdated gendered assumptions.
Ask AI to generate a “nurse,” and it almost always produces an image of a young, inexperienced woman—not an older, highly trained, authoritative professional. Ask for “senior leaders,” and women are disproportionately absent.
“In the eyes of AI,” Dr. Olivarius explained,
“If you’re a working woman, you’re disadvantaged either way. If you’re older, you don’t exist at the higher rungs of the ladder. If you’re young, you’re portrayed as younger and less experienced than you actually are.”
This distortion is particularly striking given real-world data. Women outlive men, and there is no meaningful age gap in workforce participation. Yet AI presents a fictional version of work where women vanish as they gain experience.
“That misrepresentation,” Dr. Olivarius noted,
“is remarkable—and worrying—for its discriminatory implications, especially as these tools are increasingly used in recruitment.”
How Gendered Ageism Gets Embedded in AI Systems
Exactly how large language models like ChatGPT generate their outputs is a trade secret. But the results are not ambiguous.
“AI doesn’t just reflect the world’s existing biases against women,” Dr. Olivarius said.
“It actively inserts those biases into its models for solving new problems.”
One example is especially troubling. When ChatGPT is asked to generate large numbers of CVs for different roles, it consistently produces shorter, less experienced resumes for women.
“To the algorithm,” she explained,
“a female CV is simply supposed to be more junior. Women with senior experience or high qualifications are not something the model readily imagines.”
The bias compounds itself. When asked to evaluate the same CVs it generated, ChatGPT then rates the male candidates as more attractive hires.
“Past norms and stereotypes become baked into what the model claims is true about the present and future,” Dr. Olivarius said.
“And then those false assumptions are treated as neutral judgment.”
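The experiment described above can be approximated with a simple audit harness. The sketch below is illustrative only: `score_cv` is a hypothetical stand-in for whatever screening model an employer actually uses (in a real audit it would call that tool's API). The method is the point: hold the CV text constant, vary only the candidate's name, and compare the scores.

```python
# A minimal sketch of a paired-CV audit. score_cv() is a hypothetical
# placeholder for the employer's real screening model; here it is a
# deliberately name-blind stub so the harness runs end to end.

CV_BODY = """\
10 years of engineering leadership
Managed budgets over $5M
Promoted three direct reports to senior roles"""

def score_cv(cv_text: str) -> float:
    """Stand-in for a model call. A fair scorer ignores the name;
    this stub just rewards the amount of listed experience."""
    return float(len(cv_text.splitlines()))

def audit(name_a: str, name_b: str) -> float:
    """Score gap between two CVs that differ only in the name.
    A persistent nonzero gap flags name-based (e.g. gender) bias."""
    cv_a = f"Name: {name_a}\n{CV_BODY}"
    cv_b = f"Name: {name_b}\n{CV_BODY}"
    return score_cv(cv_a) - score_cv(cv_b)

gap = audit("James Miller", "Janet Miller")
print(f"score gap: {gap}")
```

Run over many name pairs and many CV templates, a harness like this turns "trust the ranking" into a testable claim about whether the ranking depends on gender.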
The Illusion of ‘Objective’ AI in Hiring and Promotion
This matters because generative AI is already deeply embedded in workplace decision-making. An estimated 87% of employers now use some form of AI in recruitment or promotion.
The danger, Dr. Olivarius warned, is that employers treat AI outputs as objective.
“Companies must accept that generative AI is biased,” she said.
“They should never consider an AI ranking of CVs as dispassionate or free of prejudice.”
She understands why leaders want to believe otherwise.
“I get why employers hope AI will solve the long-standing problem of fair hiring,” she told me.
“But there is no tech fix—only the same old human fix.”
If organizations rely uncritically on AI tools, they risk making decisions based on discriminatory and inaccurate assumptions, while believing those decisions are neutral.
Why Superficial Fixes Don’t Work
Some companies respond to these concerns by adding filters, safeguards, or bias flags to their AI systems. But researchers caution—and Dr. Olivarius agrees—that these measures are too shallow.
“Fixing bias in generative AI is like trying to fix a hammer with the hammer you’re trying to fix,” she said.
An “objective” dataset doesn’t exist because the world itself is not objective or unbiased.
“You can’t exercise away a bad diet,” she added.
“And you can’t fix biased data with bolted-on safeguards. The existing data always wins.”
Because generative AI decision-making is so complex, targeted technical fixes can give companies false confidence—while biased outcomes continue.
Where Responsibility Really Lies
Rather than trying to “fix” AI, Dr. Olivarius argues that companies must turn inward.
“The best thing employers can do,” she said,
“is get back to the basics of identifying and eradicating bias in their own organizations.”
That means scrutinizing hiring, promotion, and evaluation data—and being honest about where women’s careers stall.
“Done right,” she explained,
“looking for AI bias becomes an exercise in looking for our own bias.”
If an organization is actively reducing discrimination, biased AI outcomes will stand out rather than blend in.
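One concrete way to scrutinize hiring data, in the spirit of the "get back to basics" advice above, is the EEOC's four-fifths rule: compare each group's selection rate to the highest group's rate, and treat a ratio below 0.8 as a red flag. The sketch below uses invented numbers purely for illustration.

```python
# A minimal adverse-impact check on hiring outcomes, using the
# conventional four-fifths rule as a first-pass screen.
# All figures below are invented for illustration.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applicants

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the highest-rate group's.
    Values below 0.8 conventionally warrant closer investigation."""
    return rate_group / rate_reference

women_rate = selection_rate(hired=30, applicants=200)  # 0.15
men_rate = selection_rate(hired=50, applicants=200)    # 0.25

ratio = adverse_impact_ratio(women_rate, men_rate)
print(f"adverse-impact ratio: {ratio:.2f}")  # 0.60, below the 0.8 threshold
```

A ratio like this doesn't prove discrimination on its own, but it tells an organization exactly where to look, which is also where biased AI outputs will be easiest to spot.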
She also raised a more uncomfortable possibility:
“It may be that recruitment and promotion are simply not good use cases for generative AI.”
No AI Shortcut to Equality
At the dawn of every major technology, we tend to overestimate what it can do. Generative AI can solve many problems—but inequality is not one of them.
“AI can’t fix injustice we haven’t fixed ourselves,” Dr. Olivarius said.
“There is simply no AI shortcut to equality—or to choosing the right candidate.”
Employers cannot outsource judgment, accountability, or fairness to machines.
The responsibility still lies with humans.
And when it comes to women’s careers—especially in hiring and advancement—that responsibility has never been more urgent.
Dr. Ann Olivarius is the Chair and Senior Partner of McAllister Olivarius. She is a Solicitor of England & Wales and of Ireland, and a U.S. attorney licensed in New York, Virginia, Minnesota, New Hampshire, Idaho, and the District of Columbia (Washington, D.C.).
Ann has over 30 years’ experience as a lawyer, financier, philanthropist, writer, and advisor to public figures. With a background in both civil rights law and corporate law, she specialises in cases of discrimination, sexual harassment and assault, and online abuse.
In December 2022 she was appointed an Honorary King’s Counsel for her “leading role in the fields of women’s rights, sexual harassment and sexual abuse,” and she was awarded Officer of the Order of the British Empire (OBE) in the New Year’s Honours List in recognition of her services to Justice, Women and Equality.