AI and Knowledge Work: A Layered Analysis
Something strange is happening to knowledge workers who use AI daily. They ship more than ever and feel worse than ever. The exhaustion arrives alongside the speed, not instead of it. They describe a fog that settles in after a day of working alongside these tools, a low-grade hum that is hard to name and harder to shake. The experience is intense and contradictory, and it has generated an enormous amount of discourse in a very short time: concepts, studies, frameworks, opinions. Most of it stays on the surface.
Sohail Inayatullah’s Causal Layered Analysis is a framework I find useful for exactly this situation. It asks you to move through four layers: from observable events (Litany) through the systems that produce them, down to the worldviews that shape those systems, and finally to the myths and metaphors that operate beneath conscious awareness. The point is not academic rigor. The point is to stop circling on one level and see what sits underneath.
I have been trying to apply this to the AI-and-knowledge-work discourse. The goal is less a finished analysis than a way to sort what we know and locate where the important questions actually live.
Litany: What it feels like
You are more productive and more depleted at the same time. The output numbers look great. The hours look reasonable. And still, at the end of the day, something is off. Engineer Siddhant Khare put it in personal terms: “I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career.”1 That is not a study finding. That is what the experience actually feels like from the inside. The metrics say one thing. The body says another.
Your own capabilities are quietly rusting. A sentence you would have drafted yourself, delegated to a prompt. A calculation you would have run in your head, outsourced. The individual losses are small. They barely register. And somewhere in the accumulation, a quiet dread: what am I still capable of without this? The skills do not vanish overnight. They rust. And the rusting is easy to ignore because the output stays the same or gets better.
The fog after a day of AI work is not tiredness. Not the fatigue of a hard day’s work. Something more diffuse. A mental hum after eight hours of prompting, evaluating, correcting, re-prompting. BCG surveyed 1,488 workers and gave this experience a clinical name: “AI brain fry,” defined as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity.2 The name is imprecise, but the recognition it triggered was not. People knew exactly what it described.
Quality debt accumulates in places you stop looking. Inbox summaries generated without reading. Research briefs condensed without checking the sources. Meeting notes filed without review. The pattern is not delegation. It is abdication of verification. Look what I can skip now. The celebration is genuine. The cost takes longer to arrive.
CLA’s first move is to notice when you are stuck on the Litany and look one layer down: not what does this feel like, but what produces the feeling.
Systems: The mechanisms underneath
The discourse has been trying to name the experience, and the names keep escalating. In under two years, five concepts have appeared: Cognitive Offloading, Cognitive Debt, Cognitive Drift, Cognitive Atrophy, Cognitive Surrender. Each more alarming than the last. Each diagnosing the individual mind.3 The progression from neutral description to capitulation tells its own story. The Cognitive Costs of AI maps this escalation in full. What matters here is that these are all attempts to put language around the raw experience described above, to move from feeling to mechanism. They sit at the systems level because they try to explain, not just describe.
Four mechanisms produce the intensity, and they compound. I have traced them in detail in AI and the Expansion of Work. The Jevons Paradox: when tasks get cheaper, you do more of them, not fewer. The Context-Switching Tax: touching six problems in a day is cognitively expensive in ways it is not for the machine. Brain fry: a distinct form of exhaustion from constant evaluation and oversight, one that shows up even as reported burnout goes down. And the Expansion of the Possible: AI does not just accelerate existing work, it makes entirely new categories of work thinkable, which creates decision load before a single new task gets done. A UC Berkeley study tracked AI-augmented employees over eight months and confirmed the pattern: they worked faster, took on broader scope, and expanded into hours that had previously been free.4
Workflow redesign is the intervention that works, and almost nobody does it. The NOBL Collective’s research confirms this: workflow redesign has the strongest correlation with positive outcomes from AI adoption, but most companies skip it entirely.5 They deploy tools into existing structures and hope for the best. That tells you something about the structures: they were never designed for deliberate knowledge work in the first place. AI did not break them. AI made the dysfunction legible (the contrast agent argument).
Worldview: The assumptions we do not question
This is the layer I find most interesting, because the assumptions that live here are rarely stated out loud.
Productivity equals output. More produced means better work. Edward T. Hall described this decades ago as the “productivity trap”: a conveyor belt that accelerates and pulls you along, and acceleration is mistaken for progress.6 The entire measurement apparatus around knowledge work is built on this assumption: lines of code, emails sent, deliverables shipped, meetings attended. AI slots into this frame perfectly. It produces more. Therefore it must be working. The question of whether “more” is what we needed does not get asked.
The diagnosis defaults to the individual. All five cognitive terms diagnose you. You offload too much. You accumulate debt. Your skills atrophy. The prescription follows: be more disciplined, think harder, resist the shortcuts. But the conditions that produce these symptoms are organizational. Calendars stacked from 8 to 6, throughput rewarded over understanding, no structural room for deliberation. Individual diagnosis protects the system from examination. It is a remarkably persistent default.
Technology is supposed to make things easier. This one runs deep. The promise of every productivity tool, from the spreadsheet to the smartphone, has been relief. When AI does not deliver that relief, the instinct is to blame the implementation, the user, the deployment strategy. Anything except the premise itself. The possibility that adding a powerful tool to a system already running beyond capacity might intensify the pressure rather than relieve it does not fit the frame.
Augment versus replace is the wrong debate. Choudary argues this forcefully in Reshuffle.7 The entire discourse about whether AI will replace us or augment us is built on task-level thinking. AI does neither, in his framing. It reconfigures the system. He uses a Maginot Line analogy: we are defending positions in a war that has already changed shape. The question is not whether your job is safe. The question is whether the way work is organized will survive contact with a technology that makes coordination radically cheaper.

But the deeper issue is why task-level thinking feels so natural in the first place. We inherited an ontology of work that breaks everything into discrete tasks, assigns them to roles, and measures performance by completion. That model was designed for manufacturing and imported into knowledge work without much revision. AI lands in this structure and gets evaluated by it: will it do this task faster, will it do that task better. The possibility that the technology changes what “a task” even means, that it dissolves the boundaries between roles, decisions, and coordination, does not fit a framework built on the assumption that work is a collection of separable units.
The contrast agent thesis sits at this level too. It is a worldview claim: what if AI is not the problem but the diagnostic instrument? That reframes the entire discourse. The discomfort we feel is not a side effect of the technology. It is information about the systems we built long before AI arrived.
Myth and metaphor: The stories underneath the stories
CLA’s deepest layer is the one that operates below conscious argument. These are the narratives that determine how we feel about what is happening, often before we have formed an opinion.
The fear of AI was pre-loaded before the technology arrived. Cultural Lead explains why. With most technologies, fear is a reaction to something new. With AI, centuries of stories about artificial beings, from the Golem through Frankenstein to Skynet to Samantha in Her, had already shaped how “AI” feels before the technology could do anything resembling intelligence. The emotional charge was in place long before ChatGPT launched. This is why the reaction to AI is so much more intense than the reaction to, say, cloud computing or mobile phones. The cultural preparation is incomparably deeper.
“Machines replace humans” is one of the most durable stories about technology. The power loom. Automation. The internet. Each time, the same fear attached itself to whatever was new. The narrative is broader than any single technology. With AI, the story hits harder because the cultural lead amplifies it, but the structure is ancient. It is why “AI is taking our jobs” lands immediately, even when the evidence is far more complicated.
Work equals moral worth, and that equation is centuries old. Max Weber’s analysis of the Protestant work ethic identified this as a foundational Western assumption.8 If work is virtue, then working less cannot be freedom. It must be failure. This helps explain why the Jevons Paradox is so persistent: even when AI could genuinely reduce workload, people fill the space with more work. Not because they are irrational, but because doing less feels morally dangerous. The acceleration comes from below, from a place that no productivity framework can reach.
All three narratives circle the same question. Cultural Lead asks: who are we, if machines can think like us? The replacement story asks: who am I, if my work can be done without me? The work ethic asks: who am I, if I am not working? The question underneath the AI discourse is not “What can this technology do?” It is “Who are we?” That is why the conversation is so charged. It is an identity crisis dressed up as a technology debate.
Reconstruction: What else could we build?
Inayatullah insists that CLA is not only deconstructive. The point of surfacing the layers is to ask: what would an alternative look like at each level?
Meaningmaking is the concept that, for me, connects across all four layers.
Meaningmaking offers a different measure: the quality of subjective value judgments. On the worldview level, this reframes what “good work” means. Vaughn Tan’s definition is precise: meaningmaking is the act of deciding that something is worth doing, that one thing is better than another, that a standard should be revised. These are the judgments that AI cannot make. They are also the judgments that brain fry specifically degrades: when cognitive resources are depleted by oversight and context-switching, the capacity for careful subjective evaluation is the first casualty.
The capacity AI degrades fastest is the one it cannot replicate. On the systems level, meaningmaking becomes a design principle. If the goal is to preserve the capacity for subjective judgment, then workflows need to be designed to protect it: structured differently, with deliberate room for the evaluative work that produces quality. The question is not whether to use more or less AI. It is how to organize work around AI so that the evaluative core remains intact.
The subjective layer is where the value lives. On the myth level, meaningmaking is a counter-narrative. There are capacities that only humans have, and those capacities are the core of what makes knowledge work valuable. The machines are not the threat. The threat is a conception of work that treats subjective judgment as overhead rather than as the point.
This is a direction, not a conclusion. What concretely follows from it, for individuals, for teams, for how organizations are designed, is a separate and much harder question. The layered analysis suggests where to start: at the worldview and myth levels, where the assumptions that produce the Litany are formed.
What would it take to build work systems that treat meaningmaking as the thing to protect, rather than throughput as the thing to maximize?
Connections
The Cognitive Costs of AI maps the five concepts the discourse has produced and traces the escalation from neutral observation to capitulation. This note places that escalation on the systems level as attempts to name the raw experience.
AI as a Contrast Agent argues that AI reveals pre-existing problems rather than creating new ones. In CLA terms, that is a worldview-level claim: a reframing of what the AI discourse is actually showing us.
AI and the Expansion of Work traces the four mechanisms (Jevons, context-switching, brain fry, expansion of the possible) that sit on the systems level. The machinery beneath the Litany’s lived experience.
Cultural Lag and Cultural Lead provides the myth-level analysis: with AI, the cultural images arrived before the technology, which explains the emotional intensity of the current moment.
Meaningmaking names the capacity that connects across all layers: the subjective value judgments that AI cannot replicate and that the current discourse risks burying under productivity metrics.
Causal Layered Analysis (CLA) is the framework itself. This note is an applied example, not a methodological guide.
Open Questions
If the most consequential assumptions operate at the worldview and myth levels, but most organizational decision-making happens at the systems level, how do you bridge that gap? Telling a leadership team “your productivity assumptions are culturally constructed” is not a strategy. What would a practical intervention at the worldview level look like inside an actual organization?
And a question about the reconstruction: meaningmaking as a design principle sounds promising in theory, but it collides with how most organizations measure success. Meaningmaking is slow, invisible, and resists quantification. Throughput is fast, visible, and easy to count. Under what conditions would an organization voluntarily shift from one to the other?
Footnotes
1. Siddhant Khare, “AI fatigue is real and nobody talks about it,” February 2026.
2. Julie Bedard et al., “When Using AI Leads to Brain Fry,” Harvard Business Review, March 2026.
3. The full taxonomy is mapped in The Cognitive Costs of AI.
4. Aruna Ranganathan and Xingqi Maggie Ye, “AI Doesn’t Reduce Work, It Intensifies It,” Harvard Business Review, February 2026.
5. NOBL Collective, “AI Work Redesign,” Shop Notebook, accessed March 2026.
6. Edward T. Hall, The Dance of Life: The Other Dimension of Time (1983).
7. Sangeet Paul Choudary, Reshuffle: The Future of Work and Organizations in the Age of AI (2026).
8. Max Weber, The Protestant Ethic and the Spirit of Capitalism (1905).