Briefing #26: Is Your "Perfect" AI Workflow Ready to Fall Apart?
The enterprise obsession with zero errors might be the biggest failure mode.
Note: This briefing was originally published on LinkedIn on January 30, 2026. It has been migrated to our new home on Substack to create a complete archive. Multi-format features like video and audio commentary are available for all new briefings published from April 2026 onwards.
In the world of structural engineering, there’s a critical difference between strength and resilience. You could build a bridge out of a material that is incredibly strong but perfectly rigid. It would stand flawlessly for years, right up until the day a unique, high-frequency vibration — from an unusual wind pattern or a specific traffic load — hits its resonant frequency.
The bridge would shatter catastrophically. A resilient bridge, by contrast, is designed with built-in flex and damping systems, allowing it to bend without breaking.
Many organizations today are building “brittle bridges.”
In our quest for AI-driven efficiency, we are architecting “perfect” processes that handle 99% of cases flawlessly. We automate the invoicing system to process an exact data format. We build a chatbot to answer a specific list of questions. But this pursuit of perfection creates extreme fragility. The system works, until it doesn’t.
A supplier adds a single new field to their invoice. A customer asks a novel question. A small, innocuous typo appears in an input table. The whole workflow grinds to a halt.
Contrary to what we might think, this isn’t a failure of technology. It’s a failure of imagination. We’re applying an industrial-era, assembly-line mindset to a dynamic, digital world.
An AI-native organization operates with a different mindset. Instead of building processes, they design systems. A process is rigid; it breaks on exceptions. A system is adaptive; it learns from exceptions.
I saw this firsthand during my time in financial services. Banks often feel compelled to ensure nothing ever fails, and in doing so they place heavy constraints on customer interactions. To verify your identity, you must provide the exact full name you used when you opened your account. You must know the exact date a transaction occurred. Or, heaven forbid, there’s an internet outage – and you have to enter your carefully typed information all over again.
A more resilient approach treats inputs as probabilities rather than absolute truths. “Unexpected” inputs get flagged as exceptions, but they don’t kill the transaction. They might get routed to a person – or an AI sub-agent – to see if there’s a way to disambiguate what’s been provided. But the user isn’t penalized just because they forgot to include their middle name.
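To make the idea concrete, here is a minimal sketch of what “inputs as probabilities” could look like in code. The thresholds and function names are illustrative assumptions, not anything from a real banking system; the point is the three-way outcome – accept, route for review, reject – instead of a brittle exact-match check.

```python
from difflib import SequenceMatcher

# Illustrative thresholds -- a real system would tune these empirically.
ACCEPT_THRESHOLD = 0.90
REVIEW_THRESHOLD = 0.60

def match_confidence(provided: str, on_file: str) -> float:
    """Similarity score (0.0-1.0) between the name provided and the name on file."""
    return SequenceMatcher(
        None, provided.lower().strip(), on_file.lower().strip()
    ).ratio()

def verify_name(provided: str, on_file: str) -> str:
    """Treat the input as a probability, not an absolute truth.

    High confidence passes straight through; a middling score becomes an
    exception routed to a person or AI sub-agent; only a clear mismatch fails.
    """
    score = match_confidence(provided, on_file)
    if score >= ACCEPT_THRESHOLD:
        return "accept"
    if score >= REVIEW_THRESHOLD:
        # Flag as an exception, but don't kill the transaction.
        return "route_for_review"
    return "reject"
```

Under this sketch, a customer who types “John Smith” when the account says “John A. Smith” gets routed for disambiguation rather than rejected outright – the forgotten middle initial becomes an exception to handle, not a dead end.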
We all know from our experience that few things in life are certain. If we’re looking to build AI workflows that move the needle, the goal isn’t to build AI that never fails. It’s to build systems that never stop improving.