When Doing Costs Nothing, Deciding Costs Everything
AI is collapsing execution costs. The Jevons Paradox predicts what comes next: more output, more decisions, more coordination load. The bottleneck moves upstream to the humans who must judge, prioritize, and choose.
In 1865, the economist William Stanley Jevons noticed something counterintuitive about steam engines: as they became more efficient at burning coal, total coal consumption went up, not down. The cheaper a resource is to use, the more of it gets consumed. Efficiency does not reduce demand. It unlocks it.
AI is doing this to cognitive work right now. And the consequences are showing up in a place no one is watching closely enough: the coordination layer.
The efficiency illusion
The prevailing narrative about AI in the workplace is a productivity story. Drafts that took hours now take minutes. Analysis that required a team now requires a prompt. Reports, code, decks, summaries. The execution layer of knowledge work is collapsing toward zero marginal cost.
This is real. The cost reductions are not trivial. McKinsey estimates reductions of 12 to 27 percent in total project costs in some industries, far beyond simple labor substitution. And the speed gains are genuine: tasks that took 120 minutes now take 24. The execution frontier is expanding in every direction.
But the Jevons Paradox predicts what happens next. When doing becomes cheap, organizations do not do less. They do more. Dramatically more. When drafting a document takes five minutes instead of fifty, you do not produce one document and recover the time. You produce ten. When analysis costs nothing, you do not run one. You run twenty. When the cost of trying approaches zero, the number of experiments, initiatives, and parallel bets multiplies.
The cost of doing collapses. The volume of what gets done explodes. And every new output creates a decision that did not exist before.
The deciding tax
Every generated document, analysis, or prototype requires human judgment. Is this correct? Should we ship it? Does it align with what we agreed last week? Who owns this now? Which of these twenty analyses should we act on?
The core shift
AI does not reduce cognitive load. It shifts it: from execution to evaluation, from producing to deciding. The bottleneck migrates upstream to the humans who must judge, verify, prioritize, and choose among an expanding volume of machine-generated output.
A Harness-Uplevel study of engineering teams found that developers using AI coding tools complete 21% more tasks, but their code review times increase by 91%. The work did not disappear. It moved to the approval layer. Senior engineers now spend more time vetting AI-generated code for security risks, architectural coherence, and technical debt than they spent writing the code it replaced.
The workday does not get shorter. It gets denser. And the natural pauses in knowledge work (formatting a spreadsheet, composing a routine email, searching for a file) were also cognitive resets. When AI eliminates the pauses, what remains is an unbroken sequence of consequential decisions.
Doing costs less. Deciding costs more. And the gap is widening.
Coordination entropy
More execution does not just create more decisions. It creates more coordination needs. More parallel initiatives require more dependencies to track, more ownership to clarify, more alignment to maintain across teams and channels.
And coordination has a natural tendency toward disorder. I call this coordination entropy, the drift from clarity to ambiguity in collective decision-making.
It is what happens when agreements go unrestated. When ownership is implied but never spoken. When objections stay silent. When context shifts between conversations and no one marks the change. When memory degrades across time, across channels, across people.
No one decides to create confusion. It accumulates.
Coordination entropy exists in every organization. But when the volume of work is low and the pace is slow, the damage is manageable. Small ambiguities have time to get caught and corrected before they compound.
AI removes that buffer. When execution is fast and volume is high, small ambiguity scales faster than it ever could before. A vague decision that might have affected one project now affects twelve. An implicit ownership assumption that would have surfaced in a week now silently derails a sprint. A reopened commitment multiplies across parallel workstreams.
The emergent coordination complexity is predictable: more reopened decisions, more ambiguous ownership, more hidden dissent, more premature closure, more escalations, more meetings-after-the-meeting, more side-channel governance. Each symptom is coordination entropy scaling faster than the organization’s structure can absorb it.
Structural complexity is inevitable. Coordination entropy is optional.
Where deciding actually happens
And where does most of this coordination happen? In meetings.
This is the uncomfortable truth about the AI-efficiency narrative. The more AI handles execution, the more consequential meetings become. Meetings are the thing AI makes matter more.
But most meetings are not structured for deciding. They are structured for sharing: status updates, slide decks, information transfers. The architecture assumes the hard part is getting everyone informed. In a world where execution is abundant and judgment is scarce, the hard part is getting everyone aligned on what to do with what AI produces.
This is why distinguishing between meeting types matters structurally. A status sync has different coordination requirements than a decision forum. When organizations fail to make that distinction, every meeting defaults to information-sharing, and the deciding either does not happen, or happens informally afterward, without the structure to make it stick.
Closure quality becomes the measurable output. Not “did we meet?” but “did we decide, and will the decision hold?” Four signals (a decision, an owner, a next step, a deadline) separate meetings that produce coordination from meetings that produce more meetings.
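To make the idea concrete, here is a minimal sketch of how closure quality could be tracked. The field names and scoring rule are illustrative assumptions, not a prescribed tool: the point is only that the four signals are checkable, and a meeting item missing any of them is likely to be reopened.

```python
from dataclasses import dataclass
from typing import Optional
from datetime import date

@dataclass
class MeetingOutcome:
    """One agenda item's outcome, captured at the end of a meeting."""
    decision: Optional[str] = None   # what was decided, stated explicitly
    owner: Optional[str] = None      # the single person accountable
    next_step: Optional[str] = None  # the first concrete action
    deadline: Optional[date] = None  # when that action is due

def closure_score(outcome: MeetingOutcome) -> int:
    """Count how many of the four closure signals are present (0 to 4)."""
    signals = [outcome.decision, outcome.owner, outcome.next_step, outcome.deadline]
    return sum(1 for s in signals if s)

def is_closed(outcome: MeetingOutcome) -> bool:
    """A decision is likely to hold only when all four signals are present."""
    return closure_score(outcome) == 4

# Example: a decision with no owner or deadline scores 2 and stays open.
item = MeetingOutcome(decision="Ship the pricing change",
                      next_step="Draft the rollout plan")
print(closure_score(item), is_closed(item))  # -> 2 False
```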
The organizations that treat this as a design problem, one that is structural, measurable, and improvable, will absorb the coordination load that AI creates. The ones that treat meetings as necessary overhead will drown in output they can produce but cannot evaluate.
The real shift
AI is making a specific kind of work harder. The kind that happens between people, in real time, under ambiguity. The kind that requires human judgment about what matters, what aligns, and what to do next.
The cost of doing is collapsing. The cost of deciding is just getting started.
Summary
The Jevons Paradox predicts that when AI makes execution cheap, organizations do dramatically more, not less. Every new output generates decisions, dependencies, and coordination needs that did not exist before. Coordination entropy, the natural drift from clarity to ambiguity, scales faster when execution volume is high. The bottleneck of modern knowledge work is no longer producing. It is deciding. And meetings, where most deciding happens, are rarely structured for the load AI is about to place on them. Structural complexity is inevitable. Coordination entropy is optional, but only for organizations that design for it.
Frequently Asked Questions
What is the Jevons Paradox and how does it apply to AI?
The Jevons Paradox is the economic observation that when a resource becomes more efficient to use, total consumption increases rather than decreases. Applied to AI: as execution costs collapse, organizations produce dramatically more output (more documents, analyses, experiments, and initiatives), which creates an expanding volume of decisions that did not exist before.
What is coordination entropy?
Coordination entropy is the natural drift from clarity to ambiguity in collective decision-making. It occurs when agreements go unrestated, ownership remains implicit, objections stay silent, and context shifts without acknowledgment. No one decides to create confusion. It accumulates. AI amplifies coordination entropy by increasing the volume and speed of execution, causing small ambiguities to scale faster than organizations can correct them.
Why does AI make meetings more important, not less?
AI handles execution, but coordination, the work of aligning people on what to do with what AI produces, still requires human judgment. Meetings are where that coordination happens. The more AI handles execution, the more consequential meetings become. The challenge is that most meetings are structured for information-sharing, not for the deciding that AI makes harder.
How does AI increase decision-making costs?
Every AI-generated output requires human judgment: Is this correct? Should we ship it? Does it align with previous decisions? Who owns it? Evidence shows teams using AI tools complete more tasks but spend dramatically more time on review and approval. The bottleneck migrates from execution to evaluation, and the workday becomes denser rather than shorter.
What is the difference between structural complexity and coordination entropy?
Structural complexity (more initiatives, more dependencies, more parallel workstreams) is an inevitable consequence of AI-driven efficiency. Coordination entropy, the drift toward ambiguity in how teams decide, own, and track, is optional. Organizations can design coordination structures that absorb complexity without generating entropy: explicit closure signals, Arena classification of meeting types, and measurable coordination quality.