Briefing #25: The Fragility of AI Efficiency
Organizations racing to become AI-native are also at risk of becoming wholly dependent on AI.
Note: This briefing was originally published on LinkedIn on January 23, 2026. It has been migrated to our new home on Substack to create a complete archive. Multi-format features like video and audio commentary are available for all new briefings published from April 2026 onwards.
I recently had a conversation with a senior executive who was reminiscing about his early career, a time before desktop computers were ubiquitous. He confessed a nagging fear. “I worry,” he said, “that we’re losing our knowledge of first principles.”
“With AI, will we be able to solve problems on our own ever again?”
It’s a question I’ve pondered ever since. It’s timely, to be sure – we see a version of it playing out in headlines about the “AI cheating epidemic” in colleges and universities, where instructors are leaning increasingly on oral exams because they can no longer trust that students complete assignments and written tests unaided.
As one undergraduate student recently wrote in a powerful plea in Maclean’s, a culture of “copying, pasting, rephrasing and submitting” is creating an environment where software talks to software, while learning becomes an afterthought.
Other publications, like the Harvard Business Review, are writing about “cognitive offloading”: skilled professionals relying on AI for tasks they used to handle just fine themselves, leading to burnout – or worse.
For enterprise leaders, the well-intentioned desire to show measurable results from our AI investments risks creating exactly this dynamic. We are building a generation of “AI-literate” employees who are exceptionally good at prompting an AI, but we may be failing to cultivate “AI-fluent” teams who know how and when to apply sound judgment to its output.
Cognitive offloading at the organizational level means the atrophy of the critical thinking, problem-solving, and first-principles reasoning a company needs to survive a true crisis. In the rush to become “AI-native” (or “AI-first,” pick your term), we’re creating teams that are remarkably efficient at scaling solutions to problems AI already knows how to solve, but dangerously fragile when a novel challenge requires a human to step up.
The antidote is not to reject AI. It’s to be more deliberate and strategic in how we deploy it. We must stop thinking of “AI-native” as a buzzword for “deploying AI everywhere.” A more practical and resilient definition is: building an organization around the unique and complementary skills of humans and AI.
How can leaders act on this today? Here are two practical places to start:
Design “Graceful Workflows.” Instead of making workflows fully dependent on AI, build them to degrade gracefully. A graceful workflow is one that AI can powerfully augment, but that your human team could still execute end to end if push came to shove: if the cloud goes down or the model starts to drift. This builds resilience directly into your operations (a minimal sketch of the pattern follows these two steps).
Trial with Humans First. Before you hand a critical workflow over to an AI agent, have your most trusted human experts run the process manually for a week (or more). This does two things: it ensures the workflow and customer experience actually make sense, and it allows a designated “red team” of your best people to vet the process and identify the real risks before automation scales them.
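To make the first idea concrete, here is a minimal sketch of the graceful-workflow pattern. Everything in it is illustrative and hypothetical, not a prescription: the names (graceful_step, human_summarize) are invented for this example, and in a real deployment the fallback would route work to a trained person following a documented runbook, not a stub function.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class StepResult:
    output: str
    produced_by: str  # "ai" or "human"

def graceful_step(
    task: str,
    ai_path: Optional[Callable[[str], str]],
    human_path: Callable[[str], str],
) -> StepResult:
    """Run one workflow step: try the AI accelerator first, and fall back
    to the human process on any failure so the step always completes."""
    if ai_path is not None:
        try:
            return StepResult(output=ai_path(task), produced_by="ai")
        except Exception:
            # Model outage, API error, or a drift alarm tripped upstream:
            # the workflow still finishes, just without the accelerator.
            pass
    return StepResult(output=human_path(task), produced_by="human")

# Hypothetical human path: in practice this routes to a person
# following a maintained runbook, not a one-line stand-in.
def human_summarize(ticket: str) -> str:
    return f"[manual summary of: {ticket}]"

# With ai_path=None (the "cloud is down" case), the workflow still runs.
result = graceful_step("example ticket text", ai_path=None, human_path=human_summarize)
print(result.produced_by, "->", result.output)
```

The design choice that matters is that the human path is a first-class, maintained procedure your team still knows how to run, not an emergency afterthought.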
This measured approach ensures that AI serves the business, not the other way around. It allows us to gain the efficiencies of AI without sacrificing the human judgment that ensures our long-term resilience.