GROWTH|WISE
Opinion

Who Owns the Decision When Nobody Decides Alone?

Alan Knox argues that humans must own decisions in an AI-first world. He’s right. But most organizational decisions aren’t made by one person — and that changes the problem entirely.

By Vanessa Meyer · 9 min read

Alan Knox recently published a piece called "Human Roles in an AI-First World" that I think deserves serious attention. His core argument: AI can make decisions, but it can't answer for them. Accountability is a relationship between people, and no amount of capability changes that. Someone has to own the decision.

I agree with almost all of it. And I think there’s one dimension he doesn’t address that changes how the framework works in practice.

What Knox gets right

Knox reframes the “human in the loop” conversation in a way I find genuinely useful. A checkpoint is optional — if the AI is usually right, efficiency says skip it. A decision owner is structural — someone who directs, judges, controls, and answers for outcomes regardless of how much of the work AI did.

His four roles are well-drawn. Direction — what are we trying to accomplish and why? Judgment — what does this specific situation call for? Control — where does AI operate and where does it stop? Responsibility — who answers when things go wrong?

And his “muscle atrophies” warning is one that anyone leading teams should take seriously. The longer humans don’t exercise judgment, the less capable they become of exercising it. We don’t just lose function. We lose the capacity for function.

This is a framework worth building on. So let me build on it.

The gap: decisions aren’t made alone

Knox frames accountability around a single decision owner. One person who holds direction, exercises judgment, sets control boundaries, and answers for outcomes.

But step inside any organization and ask: who made that decision?

In my experience, the answer is almost never one person. It’s a product lead and an engineering manager who talked through the tradeoffs. It’s a cross-functional meeting where three teams negotiated scope. It’s a series of conversations where ownership was implied but never explicitly stated, where objections were heard but not resolved, where agreement was assumed because nobody pushed back.

Knox’s four roles don’t disappear in this context. They get distributed. Direction comes from leadership but gets interpreted by the team. Judgment gets exercised by whoever happens to speak up. Control boundaries get set in principle and eroded in practice. Responsibility attaches to a name on a DACI chart that may or may not reflect who actually drove the decision.

The moment accountability is distributed across a group, a new problem emerges: how do you know these roles are actually being held?

You can’t self-assess your way out of this

This is where recent research on team coordination sharpens the problem considerably.

Research from the Collective Intelligence Labs at the Stockholm School of Economics studied 50 knowledge-intensive teams across 22 organizations. Among the findings, one stands out in relation to Knox’s framework.

A cluster of teams the researchers called the "second best," performing just below the top tier and producing decent output, had a specific and dangerous profile: a weaker understanding of their task than they realized; fewer integration behaviors, meaning less surfacing of expertise and less challenging of each other's thinking; lower psychological safety; and a consistent tendency to overrate their own performance relative to outside observers.

These teams believed they were exercising judgment. They believed they were holding accountability. In Knox’s terms, they would have said they were owning the decision. The data said otherwise.

When a structured intervention tried to make their coordination visible, they didn’t improve. They resisted. One team member wrote: “My strongest impression is that we as a team already are very good at many of the things this study wants to demonstrate, which is positive!” — a statement that perfectly captures the blind spot. The confidence was unfounded. And it was unfounded in exactly the way that Knox’s framework, applied at the individual level, cannot detect.

If you ask someone “are you holding direction?” they’ll say yes. If you ask “are you exercising judgment?” of course. “Are you taking responsibility?” always. But the research shows that teams who answer yes to all of these can still be coordinating poorly — and they can’t tell the difference from the inside.

Individual accountability needs collective infrastructure

Knox warns about the “Dead Internet Theory” — activity without purpose, production without presence. He’s worried about a world where nobody’s home.

I think the same thing can happen inside an organization, and it doesn’t require AI to cause it.

Teams can hold meetings where direction is discussed but never confirmed. Judgment can be exercised by individuals who never surface it to the group. Control boundaries can exist on paper while scope creeps in every conversation. Responsibility can be assigned to a name that everyone sees but nobody enforces.

Knox’s four roles are the right roles. But at the organizational level, they need infrastructure to be real. Not just someone designated as the owner, but something that shows whether ownership is actually being exercised in how the team coordinates.

Is the person who holds direction actually driving the conversation to closure? Is judgment being integrated from the people who have relevant expertise, or is the loudest voice carrying the room? Are control boundaries being maintained, or are they eroding meeting by meeting? Is the responsible party actually answering for outcomes, or just signing off?

These aren’t questions you can answer from a DACI chart. They’re questions about what’s happening in the coordination layer — the space between individual accountability and collective action.

The instrumentation layer Knox doesn’t mention

Knox writes that human ownership “operates at the system level.” I agree. But he describes that system-level ownership in terms of design, monitoring, boundaries, and answering for outcomes. These are all things a person does.

What’s missing is the mechanism that shows whether they’re doing them.

This is the problem we’re working on at Growth Wise. We call it decision reliability infrastructure — instrumenting the coordination layer so that the structural quality of how teams decide together becomes observable. Not replacing Knox’s four roles, but making it possible to verify they’re being held.

When direction is being held, you can see it in how decisions close with clear purpose and scope. When judgment is being exercised, you can see it in whether diverse expertise actually enters the conversation. When control is maintained, you can see it in boundaries that hold across meetings rather than eroding. When responsibility is real, you can see it in follow-through that matches commitment.

Without this visibility, Knox’s framework is aspirational. With it, it becomes operational.

The atrophy is already happening

Knox worries that the muscle atrophies — that humans who don’t exercise judgment lose the capacity for it. He frames this as a future risk driven by AI.

The research suggests the atrophy is already happening, and AI isn’t the cause. Teams whose coordination goes unobserved lose integration behaviors over time. The control teams in the Stockholm study didn’t face AI displacement. They just kept working together without structured reflection. And they got measurably worse.

The muscle atrophies not because AI took over, but because nobody was watching whether the muscle was being used. Knox is right that presence matters. But presence without instrumentation is a promise without verification.

Someone has to own the decision. Knox is right about that. But in organizations, owning the decision means owning how the team coordinates — and that requires more than individual resolve. It requires something that makes coordination visible.

Frequently Asked Questions

What is the difference between “human in the loop” and “decision owner”?

As Alan Knox argues, “human in the loop” positions people as checkpoints who approve AI outputs — a role that feels optional when AI is usually right. A “decision owner” holds the decision itself: they direct, judge, control, and answer for outcomes. The shift changes human involvement from quality control to structural accountability.

Who owns decisions when they are made by teams?

Most consequential organizational decisions are collective — made by cross-functional groups where direction, judgment, and responsibility are distributed across multiple people. While frameworks like DACI assign ownership roles, research shows that assigned accountability and actual coordination behavior often diverge. Teams can believe they are exercising judgment and holding accountability while their coordination tells a different story.

Can teams accurately assess their own coordination quality?

Research from the Collective Intelligence Labs at the Stockholm School of Economics found that the teams most at risk of coordination failure consistently overrated their own performance. These "second best" teams rated themselves higher than outside observers did, by a persistent margin that did not change even after intervention. Self-assessment is unreliable for exactly the teams that need it most.

How does decision reliability infrastructure support accountability?

Decision reliability infrastructure instruments the coordination layer to make it observable whether accountability roles are actually being held in practice. It shows whether direction is driving decisions to closure, whether judgment from diverse expertise is entering conversations, whether control boundaries hold across meetings, and whether responsibility translates to follow-through — moving accountability from aspiration to verifiable practice.
