Decision Rights in 2026: Frameworks, Governance Tools, and the Layer That’s Missing
Organizations invest heavily in defining who decides what. Frameworks assign roles. Governance tools track authority. Decision logs capture outcomes. But none of them answer the question that actually matters: did the decision hold?
Decision rights are among the most established concepts in organizational design. The premise is straightforward: when it’s clear who has the authority to make which decisions, organizations move faster and with less friction. When it’s unclear, decisions stall, get relitigated, or get made by whoever happens to be in the room.
The premise is correct. Clarity about decision authority is necessary. But it’s not sufficient. And the gap between the two is where most organizations lose more than they realize.
The decision rights landscape
Over the past two decades, the decision rights space has developed three distinct layers. The first assigns authority through frameworks. The second operationalizes decision processes through governance tools. The third logs outcomes through capture tools. All three are useful. None of them are sufficient. And the gap between the third and reality is where decision failures accumulate.
Layer 1: Frameworks — defining who decides
RACI (Responsible, Accountable, Consulted, Informed) was one of the first decision rights frameworks to gain traction. It maps decisions across a matrix, assigning each role a letter. It’s intuitive and portable. Every organization understands what it means to be “accountable” for a decision.
DACI (Driver, Approver, Contributors, Informed), popularized by Atlassian, updated the framework for modern teams. DACI emphasizes driving to closure as a distinct role — not just being responsible, but moving the decision forward. It’s been adopted widely in tech organizations where velocity matters.
RAPID (Recommend, Agree, Perform, Input, Decide), developed by Bain, focuses on the sequence of decision-making. RAPID forces you to map who recommends, who agrees, who performs, who provides input, and who decides. It’s particularly useful for organizations wrestling with delegation and consensus.
OVIS (Own, Veto, Influence, Support), BCG’s more recent contribution, addresses the reality that in matrix organizations, decision authority is rarely binary. OVIS acknowledges that multiple stakeholders have legitimate but different levels of influence, and tries to make those gradations explicit.
SPADE (Setting, People, Alternatives, Decide, Explain) comes from the tech sector — developed at Square and adopted across Silicon Valley. Its differentiator is that it’s a process, not just a role assignment. It requires the decision-maker to document the setting (context and constraints), identify alternatives, make the call, and explain the reasoning.
These frameworks are genuinely useful. They reduce ambiguity. They make it possible to have a conversation about how a particular decision should be made before you’re in the middle of trying to make it. But they are frameworks — blueprints, not execution.
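To make the comparison concrete, a framework assignment is at heart a mapping from a decision to roles. Here is a minimal sketch in Python using DACI; the specific decision and people are hypothetical, and the field names are illustrative rather than any tool’s schema:

```python
from dataclasses import dataclass, field

# DACI as described above: one Driver, one Approver,
# any number of Contributors and Informed parties.
@dataclass
class DaciAssignment:
    decision: str
    driver: str
    approver: str
    contributors: list[str] = field(default_factory=list)
    informed: list[str] = field(default_factory=list)

    def role_of(self, person: str) -> str:
        """Return the DACI letter for a person, or '-' if unassigned."""
        if person == self.driver:
            return "D"
        if person == self.approver:
            return "A"
        if person in self.contributors:
            return "C"
        if person in self.informed:
            return "I"
        return "-"

# Hypothetical example: roles for a single pricing decision.
pricing = DaciAssignment(
    decision="Q3 pricing change",
    driver="Sarah",
    approver="Raj",
    contributors=["Mei", "Tom"],
    informed=["Finance"],
)
print(pricing.role_of("Sarah"))  # D
print(pricing.role_of("Mei"))    # C
```

Note what the structure can and cannot express: it records that Sarah is the Driver, but nothing in it can record whether she actually drove, which is exactly the gap discussed below.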
Layer 2: Governance tools — operationalizing decision processes
Holaspirit (by Talkspirit) enables dynamic role mapping and transparent decision processes across 450+ organizations in 30 countries. It gives structure to how roles are assigned, decisions are tracked, and authority flows through organizations in real time.
Loomio originated in New Zealand’s civic movement as a tool for participatory democracy. It’s open source, GDPR-compliant, available in 30+ languages, and designed for groups that need to move from discussion to decision with genuine input. It operationalizes the participatory aspects of decision-making.
Murmur and similar tools implement the “advice process” — a lighter-weight decision-making model where anyone can make a decision as long as they’ve sought advice from those affected and those with expertise.
These governance tools solve a real problem: they operationalize decision processes so they’re not just ideas, but workflows. They log who was involved, what the timeline was, what the decision was. They create an artifact where before there was only conversation.
Layer 3: Capture tools — logging what was decided
Atlassian Confluence lets teams embed DACI templates in their decision documentation, creating a searchable archive of who decided what and when.
Fellow turns meeting notes into decision logs, capturing action items, owners, and due dates alongside the decisions those meetings produced.
Hypercontext (acquired and rebranded as Spinach AI) automatically processes meeting outcomes and logs them alongside the organizational structures that own different decisions.
Notion decision logs let organizations maintain single sources of truth for decisions: who decided, when, what changed, what the decision was based on.
These capture tools solve the documentation problem. They answer: “What did we decide?” They create accountability through visibility. And they are valuable.
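Whatever the tool, the capture layer reduces to a structured record. As a hedged sketch, assuming nothing about any particular product’s schema, a decision-log entry with the fields listed above (who decided, when, what the decision was based on) might look like:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative decision-log record mirroring the fields named above:
# who decided, when, what was decided, and what it was based on.
@dataclass
class DecisionRecord:
    decision: str          # what was decided
    owner: str             # who decided
    decided_on: date       # when
    rationale: str         # what the decision was based on
    status: str = "open"   # open | held | reversed

log: list[DecisionRecord] = []

def capture(record: DecisionRecord) -> None:
    """Append a decision to the searchable archive."""
    log.append(record)

# Hypothetical entry.
capture(DecisionRecord(
    decision="Adopt DACI for product launches",
    owner="Sarah",
    decided_on=date(2026, 1, 15),
    rationale="Reduce relitigation of launch scope",
))
print(len(log))  # 1
```

The `status` field is the telling one: a capture tool records it once, at write time, which is why it cannot by itself say whether the decision later held.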
The structural gap
But all three layers — frameworks, governance tools, and capture tools — have something in common. They instrument the authority structure, not the coordination quality.
DACI says Sarah is the Driver. It doesn’t show whether Sarah actually drove to closure, or whether she let the meeting get hijacked by whoever spoke loudest. It doesn’t show whether she was exercising judgment or deferring to whoever sounded most confident.
The governance tool records that consensus was reached with no objections. It doesn’t show whether the absence of objections means genuine agreement, or whether people disengaged because they’d already decided the outcome was predetermined.
The decision log records the choice. It doesn’t show whether the decision held through the next sprint, or whether it was quietly reversed in a Slack thread two days later because new information emerged, or because the decision owner lost confidence, or because the shallow execution problem meant nobody ever actually committed to it.
Where decisions actually fail
Research from the Collective Intelligence Labs at the Stockholm School of Economics studied teams across 22 organizations. It found that teams whose coordination goes unobserved degrade over time, and that the teams most at risk are the ones that overrate their coordination quality.
Is the Driver actually driving? Is the Approver exercising judgment, or rubber-stamping because they’re busy and trust the Driver has done their homework? Are the Contributors actually contributing, or are they sitting in meetings waiting for the decision they know has already been made? Are the people marked Consulted actually getting consulted, or is that a formality?
These are coordination quality questions. No framework answers them. No governance tool surfaces them. No decision log captures them.
The verification layer
This is the space that decision reliability infrastructure addresses. It instruments the coordination layer — making visible whether the decision rights architecture is actually functioning in practice.
Rather than asking “who is accountable for this decision?” it asks “are the people accountable for this decision actually exercising that accountability?” Rather than asking “was there consensus?” it asks “did the team genuinely integrate diverse perspectives, or did someone check the ‘consensus’ box?” Rather than asking “what did we decide?” it asks “what did we decide, and is that decision holding?”
This isn’t a replacement for decision rights frameworks. It’s what makes them trustworthy.
The AI governance dimension
The question becomes sharper when regulators enter the picture. The EU AI Act goes into enforcement in August 2026. CCPA ADMT regulations are expanding. These regulations require that human decision rights be genuinely exercised — not just formally assigned.
Alan Knox argues that regulators will increasingly distinguish between the checkpoint (the moment where a human has to approve something) and the decision owner (the person actually making the call). A DACI chart that says Sarah is the Approver satisfies the checkpoint requirement. But did Sarah actually decide, or did she just authorize what the Driver recommended?
Regulators won’t accept a DACI chart as evidence of genuine human oversight. They’ll want to see that the decision owner was actually present in the decision. That they integrated input. That they exercised judgment. That they were willing to override the recommendation if their judgment diverged.
The complete decision rights stack
Define → Operationalize → Capture → Verify
Define: Assign authority through frameworks (RACI, DACI, RAPID, OVIS, SPADE). Make it clear who decides what.
Operationalize: Use governance tools (Holaspirit, Loomio, Murmur) to make decision processes repeatable. Build them into workflows, not conversations.
Capture: Log decisions (Confluence, Fellow, Hypercontext, Notion). Create accountability through documentation.
Verify: Instrument the coordination layer. Show whether the authority structure is actually being exercised.
Without the fourth layer, the first three are aspirational. With it, they become trustworthy.
Frequently Asked Questions
What are decision rights frameworks?
Decision rights frameworks are tools for assigning and clarifying authority in organizations. RACI (Responsible, Accountable, Consulted, Informed), DACI (Driver, Approver, Contributors, Informed), RAPID (Recommend, Agree, Perform, Input, Decide), OVIS (Own, Veto, Influence, Support), and SPADE (Setting, People, Alternatives, Decide, Explain) are among the most widely adopted frameworks. Each provides a way to map decisions, define roles, and reduce ambiguity about who has decision authority.
What tools help track decision rights?
Governance tools operationalize decision processes and document outcomes. Holaspirit (used by 450+ organizations across 30 countries) enables transparent role mapping and decision tracking. Loomio (originated from New Zealand’s civic movement, open source, GDPR-compliant, available in 30+ languages) facilitates collaborative decision-making. Tools like Atlassian Confluence, Fellow, Hypercontext, and Notion help capture and log decisions. However, these tools operationalize and document decisions but don’t verify whether the coordination quality behind those decisions is actually sound.
Why do decision rights frameworks fail?
Decision rights frameworks define authority structure, but there’s a critical gap between assigned authority and exercised authority. The Driver doesn’t actually drive to closure. The Approver rubber-stamps without exercising judgment. Stakeholders marked Consulted are never meaningfully consulted. This shallow execution problem — where roles are assigned on paper but not actually performed — is where most organizational decision failures occur.
What is decision reliability infrastructure?
Decision reliability infrastructure instruments the coordination layer to show whether decision rights are actually being exercised in practice, not just formally assigned. Rather than replacing frameworks, tools, or capture processes, it makes visible the gap between documented decision authority and actual coordination quality — showing whether teams are genuinely integrating knowledge and exercising the authority assigned to them.
How do decision rights relate to AI governance?
As AI governance regulations expand — including EU AI Act enforcement (August 2026) and CCPA ADMT requirements — regulators require evidence that human decision rights are genuinely exercised, not just formally assigned. A DACI chart alone is insufficient for compliance. Organizations must be able to verify that the decision owner is actually making decisions, checkpoints are being observed, and human oversight is genuine. This makes decision reliability infrastructure a compliance requirement, not just an operational preference.
Related Articles
Beyond DACI, RAPID, and SPADE
How decision reliability infrastructure completes the stack.
Research: Your Most Confident Teams Are Your Biggest Risk
Why coordination erodes without structured reflection.
Patterns: Why Teams Keep Reopening Decisions
The structural patterns behind decision instability.