AI as a Contrast Agent
Scroll through LinkedIn on any given morning and you will find people proudly sharing their AI workflows: inbox summaries generated in seconds, meeting notes they never read, research briefs condensed without checking the sources. The tone is celebratory. Look at what I can skip now.
John Willshire has a name for what is being skipped: Cognitive Debt, the practice of forgoing the thinking to get the answers.1 The term has resonated widely, and for good reason. But here is what interests me more than the debt itself: why are people so eager to take these shortcuts? The answer, in most cases, has little to do with the tool. Their mornings had no room for careful reading long before AI arrived. The pressure was already there. AI just made the shortcut available.
I keep coming back to this pattern. Every time the discourse surfaces a new concern about AI and cognition (and the list is growing), the concern points at the individual. You are offloading too much. You are accumulating debt. But when you look at the conditions under which people reach for these shortcuts, the picture shifts. The problem was already there. AI made it visible.
In medicine, a contrast agent is a substance you inject to make internal structures show up on a scan. It does not create the tumor. It makes the tumor legible so you can act on it. I think AI is doing something similar to our work systems, our organizations, and our broader social infrastructure. And the mechanism is acceleration. Because AI makes shortcuts so much easier, we take them more often, more visibly, more consequentially. Research that took days now takes minutes and gets sent without review. Decisions that required deliberation get delegated to a prompt. The speed is what makes the underlying patterns impossible to ignore. The discourse around AI (the fears, the studies, the new terminology) is the resulting scan. What it shows is not primarily about the technology. It is about us.
What individuals see
The discourse mapped in The Cognitive Costs of AI has produced five concepts in under two years, each more alarming than the last. Every one of them diagnoses the individual mind, and every one prescribes an individual fix: think harder, resist the shortcuts, be more disciplined.
But look at who is taking these shortcuts and why. A 2026 survey found that 62 percent of UK executives use AI for “most decisions.”2 That statistic is usually cited as evidence of surrender. I read it differently. These are people whose calendars are stacked from 8 to 6, who process hundreds of emails daily, who are expected to have opinions on everything from brand strategy to regulatory compliance. They were already operating at the edge of their cognitive capacity long before ChatGPT arrived. AI gave them a way to cope that was previously unavailable.
Igor Schwarzmann puts it well: “Perhaps instead of performative criticism about AI ‘diminishing real work,’ we should acknowledge that not all work needs to be, or ever was, profound. The real conversation should be about why we’ve created a culture that demands constant exceptionalism in even the most routine tasks.”3
What the contrast agent shows at this level: Cognitive Debt is not a sign of individual weakness. It is a symptom of work structures built around the assumption that people can sustain peak cognitive performance indefinitely. AI did not create that assumption. By making the shortcuts frictionless, it revealed how deep the assumption runs.
What organizations see
At a mobility congress in Berlin earlier this year, a senior executive told me she has absolutely no time to explore what AI could do for her work. The irony was hard to miss. AI tools could help her manage the very overload that prevents her from learning about them. This is a chicken-and-egg problem, and it is not about the technology. It is about how her organization defines what counts as productive work, and who gets the time to do it.
The NOBL Collective’s research on AI and work redesign points to a striking finding: workflow redesign has the strongest correlation with positive financial outcomes from AI adoption.4 Yet most companies skip it entirely. They deploy tools into existing structures and hope for the best. This tells us something about those structures: they were never designed for deliberate knowledge work in the first place. Work accumulated through historical accident, email chains, and org chart inertia. AI makes this visible because it forces a question that was never asked: what are we actually doing here, and why are we doing it this way?
AI also creates genuinely new problems at this level. NOBL documents an “Authority Migration” where decision-making power shifts to whoever controls the AI system parameters, often without any conscious delegation by leadership. That is not a pre-existing condition. But the reason organizations are vulnerable to it is that most never clarified where decision-making authority should sit in the first place. The acceleration exposes these gaps, and sometimes widens them.
What society sees
The largest study of what people want from AI surveyed 81,000 participants across 70 languages. The researchers expected to learn about use cases. What they got, as Carlo Iacono wrote in his analysis, was “a mass confession about human bottlenecks.”5
People did not describe wanting smarter software. They described wanting time and attention back, relief from what Iacono calls “the executive-function overload of modern existence.” Eighty-one thousand people, across seventy languages, said roughly the same thing: “I need help, and you are what is available.”
The study found that in wealthier countries, AI is becoming a cognitive welfare tool for overloaded professionals. In lower-income countries, it functions as a leapfrog mechanism for skills and income. Across both contexts, people are turning to a technology to fill gaps left by institutions that were supposed to provide education, healthcare, career development, and social support.
Iacono’s conclusion: “The 81K study reads less as a report about what people want from AI than as a diagnostic scan of a civilisation that outsourced its care infrastructure, overloaded its workers, defunded its public spaces, and then watched a machine move into every gap it left empty.”
At this scale, the contrast agent shows something beyond individual cognitive debt or organizational dysfunction. It shows a mismatch between how institutions are designed and what people actually need to function.
Reading the scan
Across all three levels, the same pattern repeats. Individual cognitive struggles point to unsustainable workloads. Organizational dysfunction points to work that was never deliberately designed. Societal gaps point to institutions that stopped serving the people they were built for.
The productive question shifts. “What is AI doing to us?” keeps the focus on the technology and produces an endless supply of alarming terminology. A more useful question: what is the AI discourse showing us about the systems we have built? And what do we do with what we see?
A contrast agent is only useful if someone reads the scan. The technology has done its part: by accelerating everything, it has made the structures underneath impossible to ignore. What follows requires something AI cannot provide: the subjective judgment to decide what to protect and what to redesign. That is human work. And it cannot be delegated.
Connections
The Cognitive Costs of AI maps the terminology that the AI discourse has produced and traces the escalation from neutral observation to capitulation. This note asks what that escalation reveals.
AI and the Expansion of Work traces the mechanisms through which the intensification becomes concrete: Jevons effects, context-switching costs, brain fry, and an expanding space of possibilities. The contrast agent makes the pressure visible; the expansion of work describes how it accumulates hour by hour.
Meaningmaking names what remains uniquely human in this landscape: the capacity for subjective value judgments. If the contrast agent shows where systems are failing, meaningmaking is the capacity we need to decide what to build instead.
Open Questions
If AI makes systemic problems visible, who has the responsibility to look? Individual knowledge workers feel the pressure but rarely have the power to change the conditions. Executives have the authority but are themselves caught in the same system. Is there a role for the AI discourse itself as a catalyst for organizational and institutional redesign?
And how do you distinguish between problems that AI genuinely creates and problems that AI merely reveals? The line is not always clean. Both exist. But collapsing them into a single narrative of “AI is hurting us” misses the diagnostic value of the moment we are in.
1. John Willshire, “Cognitive Debt,” Smithery, 2025.
2. 3Gem Research, survey of 200 UK executives, reported in The Register, 2026.
3. Igor Schwarzmann, “Don’t hate the player, hate the game,” THE NEW, January 2025.
4. NOBL Collective, “AI Work Redesign,” Shop Notebook, accessed March 2026.
5. Carlo Iacono, “What Do You Want From AI?,” Hybrid Horizons, March 2026. Based on Anthropic’s study of 81,000 participants.