Briefing #13: AI's Superman Syndrome
Should AI be more like Superman or more like Iron Man?
Note: This briefing was originally published on LinkedIn on October 17, 2025. It has been migrated to our new home on Substack to create a complete archive. Multi-format features like video and audio commentary are available for all new briefings published from April 2026 onwards.
Suspend your disbelief for a second: 1981’s Superman II has something to teach us about the current state of AI in the enterprise.
In the film, there’s an arc where Superman (Clark Kent) voluntarily relinquishes his powers to live a normal life. He gets into a fistfight with a trucker at a roadside diner. And, without his powers, he’s handily beaten. The look on Clark’s face isn’t just pain — it’s the profound shock of being forced to confront his own sudden limitations.
There’s a version of that scene that’s now playing out in the workplace.
Harvard Business Review recently published research indicating that, after using generative AI tools, workers saw their intrinsic motivation drop by 11% and their boredom at work increase by 20% when asked to take on tasks without AI assistance.
What might we take away from this? In the race to adopt AI, we’ve adopted tools that have indeed granted our teams superpowers, but inadvertently overlooked the fine line that exists between empowerment and dependency.
This is AI’s Superman Syndrome: a condition where employees, rapidly accustomed to the power of AI to help them tackle complex work, become disengaged in the face of tasks that might require them to persist without their newfound powers.
It turns out this is about more than just the stress of having to do more work to get to a result. It’s the emergence of a phenomenon psychological researchers call “cognitive dependency” that’s a result of the “cognitive offloading” enabled by AI. A recent ZDNET/Upwork survey found a particularly telling paradox: heavy AI users are 88% more likely to experience burnout.
When we consistently outsource our thinking to a tool, our own mental muscles for certain tasks can begin to atrophy. When AI is unavailable or unsuitable for a task, the work feels disproportionately difficult — not because the task has changed, but because our tolerance for the cognitive effort has diminished.
This insight underpins an unexpected strategic choice that all leaders must now make: what superpowers shall we bestow on our organizations?
One path is Superman: the pursuit of cognitive replacement. We design AI that does the thinking for the team. It effortlessly delivers answers and automates the cognitive work needed to make effective decisions.
A second path is Iron Man: the pursuit of cognitive augmentation. We design AI like a “power suit,” enhancing and amplifying a team’s ability to think.
Let’s set aside the question of whether DC Comics or Marvel shall reign supreme. The Superman route is seductive because it promises instant gains, but it’s also the path that can lead to skill atrophy and dependency. The Iron Man route conceives of AI as a force multiplier for a team’s inherent genius: synthesizing complex data and vetting options, while ultimately requiring the human operator to apply the final layer of essential judgment.
Put simply, the Iron Man path leads to long-term capability while the Superman route might lead to long-term dependency.
MIT recently found that 95% of enterprise AI pilots fail to demonstrate positive ROI. This isn’t an indictment of AI per se. Rather, it’s an indication that AI is all too often treated as a strategic lever to replace rather than to enhance. The rare 5% of successful pilots are built on a different premise: the recognition that the goal isn’t to replace human intelligence at work, but to amplify it.
Deciding to take the Iron Man path is an act of deliberate, human-centered design. It requires leaders to mandate that AI systems not just provide answers, but also serve as agentic teachers that can explain their logic. It means prioritizing “cognitive cross-training”: creating conditions for teams to collaborate, interact, and take on problems without AI, building resilience in their problem-solving muscles.
And it might mean considering KPIs that go beyond simply measuring “AI adoption” rates to measuring teams’ growing capacity to absorb and critically evaluate information, and then to exercise strategic judgment.
Let me be clear: I’m not saying we should avoid granting teams AI superpowers.
I’m just saying that the most resilient, most “AI first” organizations will be the ones who have the foresight to build teams of Iron Men, and not just legions of dependent Supermen.