Briefing #17: The Flaw in AI's Mirror
Why AI bias makes businesses operationally unsound
Note: This briefing was originally published on LinkedIn on November 14, 2025. It has been migrated to our new home on Substack to create a complete archive. Multi-format features like video and audio commentary are available for all new briefings published from April 2026 onwards.
One of my most enduring lessons from graduate school came from my training in basic research methods.
As a grad student, you learn that bias is an ever-present phenomenon, impossible to eradicate completely. The best we can do is seek to minimize it, recognizing that being vigilant and responsive to bias begins with the difficult admission that we all possess it, regardless of how objective and rational we think we are.
Today, as AI assistants, tools, and workflows are rolled out, our ability to stay on top of bias is being outstripped by the pace at which data is consumed to build them. In our rush to deploy more AI, the cultural assumptions, blind spots, and historical patterns of our world are being embedded directly into the foundational models we depend upon.
Two popular, well-cited studies of several of OpenAI’s GPT models, one published in PNAS Nexus and one from Harvard University, showed that even the most advanced LLMs exhibit a significant cultural slant toward English-speaking, Western countries (the Harvard study calls this a “WEIRD” bias: Western, Educated, Industrialized, Rich, and Democratic), a reflection of the values of the places where these models were created and the training data used to build them.
Given how quickly these and other models have become the basis for most commercial AI solutions, AI bias has crossed the chasm from intellectual curiosity to a pressing problem that organizations must now face, one with real financial and legal implications.
Consider the collective action lawsuit against Workday, which made headlines earlier this year. The suit alleges that Workday’s AI-powered screening tools systematically discriminate against applicants based on age, race, and disability.
Or look at the SafeRent case. In November 2024, the company agreed to a $2.3 million settlement after its tenant-screening algorithm allegedly rejected a disproportionate share of applicants who were using housing vouchers.
In both examples, the AI wasn’t just “unfair.” It was rejecting qualified candidates and creditworthy tenants. It failed at its one job, turning away potential business and creating massive financial and legal liability in the process.
Let me be clear: I’m not making a moral judgment on any specific set of values. Rather, this is about the risk that arises when any single, dominant perspective creates operational blind spots. It means the AI you’re seeking to deploy may be fundamentally misaligned with the very customers, employees, or partners you’re trying to serve in a global market.
This is why bias is an issue that requires strong leadership. If bias has become a concern for you, here are two suggestions to consider:
First, mandate transparency from AI vendors. Ask for governance reports, independent audit results, and clear explanations of the data used for training. Look to standards like the NIST AI Risk Management Framework to understand the key considerations that determine the trustworthiness and reliability of AI models.
Second, make your AI team cross-functional. An enterprise “red team” composed solely of AI engineers can’t recognize bias in all its forms. A team that includes functional areas such as HR, legal, operations, and sales brings a broader view of real-world blind spots that a team of developers may never see.
Ultimately, the problem isn’t that AI is biased. It’s that we are. AI is simply a mirror that scales our own blind spots. Our job as leaders is to be honest about the reflection it shows us and to be vigilant in managing the cracks it reveals.