AI and the Expansion of Work

“I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career.” When engineer Siddhant Khare wrote this in early 2026, the responses were immediate and recognizable.1 Not because software engineers are a uniquely stressed group, but because the experience he described has become universal across knowledge work: more output and more exhaustion, at the same time.

This is not what we were promised. The pitch for AI in the workplace was relief: fewer routine tasks, more time for meaningful work, a lighter cognitive load. For some tasks, that has happened. But the overall picture looks different. A UC Berkeley study that tracked AI-augmented employees over eight months found that they worked faster and took on a broader scope of tasks, often extending into hours that had previously been free.2 They felt more productive. They did not feel less busy.

Four mechanisms explain why.

1. The Jevons Paradox

In 1865, economist William Stanley Jevons observed that as coal-burning engines became more efficient, coal consumption went up, not down. Efficiency made coal cheaper to use, so people used more of it. The pattern is called the Jevons Paradox, and it maps cleanly onto what is happening with AI and knowledge work.

When AI reduces the friction of starting a task (no more blank page, no more unknown starting point), people start more tasks. When a research brief that took a day now takes an hour, the response is rarely to take the afternoon off. It is to do four more briefs. Khare puts it simply: “When each task takes less time, you don’t do fewer tasks. You do more tasks. Your capacity appears to expand, so the work expands to fill it. And then some.”
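The rebound arithmetic can be sketched in a few lines. The numbers below (an eight-hour brief compressed to one hour, a one-hour review cost per AI-assisted brief) are illustrative assumptions, not figures from the cited studies:

```python
# Toy Jevons rebound model (illustrative assumptions, not study data).
# A brief used to take 8 hours; AI cuts drafting to 1 hour. If each brief
# now feels cheap, more get started -- and total hours can grow even as
# per-task time collapses.

hours_per_brief_before = 8
hours_per_brief_after = 1

briefs_before = 1                    # one brief filled the day
time_before = briefs_before * hours_per_brief_before      # 8 hours

# The response is rarely to take the afternoon off: it is more briefs.
briefs_after = 5                     # "do four more briefs"
drafting_after = briefs_after * hours_per_brief_after     # 5 hours

# Every AI output still needs human evaluation (assumed 1 hour per brief).
review_per_brief = 1
total_after = drafting_after + briefs_after * review_per_brief  # 10 hours

# Per-task time fell 8x, yet the day got longer, not shorter.
print(time_before, total_after)
```

The point of the sketch is that the efficiency gain lands on the per-task denominator while demand and oversight land on the numerator: the "freed" time is not free.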

The Berkeley study confirms this at scale. Workers slipped small amounts of work into moments that had previously been breaks. On their own initiative, they did more because AI made “doing more” feel possible and, in many cases, intrinsically rewarding. The researchers call this “silent workload creep”: the extra effort is voluntary, often enjoyable, and invisible to managers. The friction that might naturally have capped this cycle is disappearing.

2. The Context-Switching Tax

Before AI, Khare describes spending a full day on one design problem. “I’d sketch on paper, think in the shower, go for a walk, come back with clarity. The pace was slow but the cognitive load was manageable: one problem, one day.”

Now he touches six problems in a day. Each one “only takes an hour with AI.” But context-switching between six problems is expensive for the human brain in a way it is not for the machine. The AI does not get tired between problems. The human does.

There is a subtler cost here as well. Every AI output requires evaluation. Is this correct? Relevant? Hallucinated? Khare identifies the core tension: “You are collaborating with a probabilistic system, and your brain is wired for deterministic ones.” That mismatch is a constant, low-grade source of cognitive work that does not show up in any productivity metric. You are not just doing more. You are also judging more, in smaller increments, across more contexts, all day long.

3. Brain Fry

A 2026 BCG study of 1,488 workers gave this experience a name: “AI brain fry,” defined as mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity.3 Participants described a “buzzing” feeling, a mental fog, difficulty focusing, slower decision-making, headaches. One senior engineering manager captured it: “I was working harder to manage the tools than to actually solve the problem.”

The study’s most interesting finding is a distinction that explains the paradox of feeling better and worse at the same time. When AI replaces repetitive tasks, burnout scores go down (15% lower). But when AI requires intensive oversight, a different kind of strain appears. Burnout is emotional exhaustion. Brain fry is cognitive exhaustion. They are not the same thing, and AI affects them in opposite directions. A person can experience less burnout (because the boring parts are gone) and more brain fry (because the remaining work demands unbroken evaluation and judgment) simultaneously.

This connects directly to the Cognitive Costs of AI. The escalation from Cognitive Offloading to Cognitive Surrender describes a trajectory. Brain fry is what that trajectory feels like from the inside: the acute experience of a cognitive system running beyond its capacity. The BCG data shows it is not a metaphor. Participants who reported brain fry experienced 33% more decision fatigue, 39% more major errors, and were 39% more likely to intend to quit their jobs.

The study also found a ceiling. Productivity increased as workers went from one AI tool to two, rose more slowly with three, and dropped after that. The human cognitive system has limits, and stacking more agents does not override them.

4. The Expansion of the Possible

The first three mechanisms describe intensification of existing work: more of it, faster switching, harder oversight. There is a fourth dynamic that I notice in my own work and in conversations with others who use AI extensively.

AI does not just make existing tasks faster. It makes entirely new categories of work thinkable. Research that would have required a team and a week can now be done in an afternoon. A prototype that would have stayed in the “someday” folder becomes a Tuesday afternoon project. The scope of what one person can attempt has expanded dramatically.

This sounds like liberation, and sometimes it is. But it creates a new kind of load: decision load. Barry Schwartz documented this decades ago in his work on the paradox of choice: more options do not make choosing easier, they make it harder and less satisfying. Every new possibility requires a decision. Do I pursue this or not? Is this worth my time now that it is feasible? A consultant I spoke with recently described it as “drowning in feasibility”: she now sees ten possible analyses where she used to see two, and spends more energy deciding which to run than actually running them. The Möglichkeitenraum (the space of possibilities) has grown faster than anyone’s capacity to navigate it.

The Exponential View put it concisely: “The tools expand the scope of work faster than they compress it.”4 Jevons is about doing more of the same. This is about the same person suddenly facing a larger world of potential work, all of it now plausible, all of it competing for attention and judgment.

What Accumulates

These four mechanisms compound. The result is a new kind of work experience: productive by every external metric, and exhausting by every internal one. The workers themselves struggle to articulate why they are tired, because they know they are getting more done.

Khare captures it: “The real skill of the AI era is knowing when to stop.”5 That sounds like individual advice, and it is. But it connects to a larger question. If the systems we work in reward throughput, measure token consumption, and celebrate productivity gains without asking about cognitive cost, then “knowing when to stop” requires swimming against the current. The contrast agent is at work again here: AI did not invent the pressure to produce. It accelerated it to the point where the human cost became impossible to ignore.

Connections

The Cognitive Costs of AI maps the terminology the discourse has produced. This note describes the lived experience beneath those terms: the daily mechanisms through which work expands and cognition strains.

AI as a Contrast Agent argues that AI reveals pre-existing systemic problems. The expansion of work is one of the clearest examples: the pressure was already there, AI made the shortcuts frictionless, and the resulting intensity made the pressure visible.

Meaningmaking names the capacity that brain fry specifically degrades. When cognitive resources are depleted by oversight and context-switching, the subjective judgments that constitute meaningmaking are the first to suffer.

Open Questions

The BCG study found that organized team integration of AI reduces mental strain, while individual ad-hoc adoption increases it. Is there a way to design AI workflows that preserve the Jevons benefits (more output per unit of effort) without triggering the brain fry (cognitive overload from oversight)?

And a question I keep returning to: if the expansion of the possible is permanent (AI will keep making more things feasible), do we need a fundamentally different relationship with possibility itself? Not productivity advice, but a deeper reckoning with the fact that being able to do something is not a reason to do it.

  1. Siddhant Khare, “AI fatigue is real and nobody talks about it,” February 2026. 

  2. Aruna Ranganathan and Xingqi Maggie Ye, “AI Doesn’t Reduce Work, It Intensifies It,” Harvard Business Review, February 2026. 

  3. Julie Bedard et al., “When Using AI Leads to Brain Fry,” Harvard Business Review, March 2026. Study of 1,488 full-time U.S. workers. 

  4. Azeem Azhar, “The AI Productivity Paradox,” Exponential View. 

  5. Khare, “AI fatigue is real.” 

