Briefing #20: Beyond "Yes" or "No"
It’s not about whether you use AI; it’s about when and why you use it.
Note: This briefing was originally published on LinkedIn on December 5, 2025. It has been migrated to our new home on Substack to create a complete archive. Multi-format features like video and audio commentary are available for all new briefings published from April 2026 onwards.
I once had a customer, a senior leader on the cusp of retirement, who told me, “I’m sure our company must change and embrace new technologies. But not while I’m still here and have anything to say about it.”
It was a bold, candid admission. It’s also a perfect illustration of the perilous, personality-driven environment in which many organizations operate, where real transformation opportunities often go to die.
Today, we see this kind of human friction creating a new stalemate as organizations seek to wrangle with and reason about AI. A polarized, all-or-nothing debate has emerged, trapping organizations in a cycle of inaction. This isn’t an “AI divide” born from technology, but a leadership divide born from hubris.
On one side, we have “AI shamers.” These are the folks who proudly point out the perceived tics of AI-written content, as if spotting an em-dash or a certain prosaic pattern makes them a guardian of authenticity. They’re also the ones who fear most the decline of work quality through reliance on AI as a crutch.
On the other, we have “AI evangelists” who insist AI must immediately take over all “low-value” work, a dangerously subjective term. Who, exactly, gets to decide what’s “low-value”? What they fear most is becoming irrelevant and being left behind.
This binary thinking, this “my way or the highway” mindset, is a failure mode. It’s the same impulse that drives leaders to tear down the ideas of others just to assert their own authority. Much has been said in the past year about the importance of executive alignment in driving successful AI adoption, yet many organizations struggle to achieve it because debates around AI become so charged they turn into sparring matches over who’s right and who’s wrong.
This is where the real villain of AI transformation is exposed: the people-driven impulse that values being right over getting it right.
Pragmatism teaches us that neither polar position is correct. Leadership wisdom suggests that resisting a simple “yes” or “no” to AI forces us to wrangle with a more difficult, nuanced answer: “It depends.”
In this context, “it depends” isn’t an act of avoidance but the start of a strategic diagnosis: what role AI plays inside an organization, where and how it can be employed to achieve the best outcomes, and, equally important, what areas are off-limits to AI.
It depends on the problem you’re solving.
It depends on the risk of the workflow.
It depends on the data you have.
It depends on the business outcome you need.
For example, using AI to co-author a low-risk internal memo is a completely different strategic decision than deploying an agent to autonomously handle a high-risk financial compliance workflow.
One outcome values speed and “good enough” efficiency while the other demands 100% accuracy, where a single error or hallucination could be catastrophic. Without asking when and why, leaders are just blindly adopting technology (or just as blindly, refusing to consider it), rather than strategically deploying it.
The most valuable leaders in the next decade won’t be the ones who can spot an AI-generated email. They will be the ones who fight for the alignment that gets teams, leaders, and organizations past hubris and into a dialogue that confronts the real issues at hand.
The answer to “Should we use AI for...?” isn’t “yes” or “no.” It’s “Here’s when.”