GROWTH|WISE
Opinion

The KPI Level Nobody Measures: Operational Governance in PE-Backed Scale-Ups

Scale-ups build KPI systems that track whether the business is growing, acquiring efficiently, and operating sustainably. None of those systems track whether the cross-functional decisions connecting all three are actually working.

By Growth Wise Research Team

A recent Vedrai Observatory paper on European tech scale-ups makes a sharp argument: the top line alone never tells the whole story. PE- and VC-backed companies that grow without understanding the causes of that growth build fragility into the business. The paper proposes three connected KPI levels (top line, commercial, and operational/financial) and argues that these need to function as a causal system, not as separate dashboards maintained by separate functions.

That framework stops one level short.

The three levels and what they actually depend on

The KPI framework most PE-backed scale-ups operate with looks roughly like this: the top-line layer tracks ARR, NRR, GRR, and expansion revenue. The commercial layer tracks CAC, payback period, LTV:CAC, and pipeline coverage. The operational/financial layer tracks burn multiple, ARR per employee, Rule of 40, and gross margin. When these three levels are connected through a causal model, leadership can trace how a change in sales cycle length affects CAC, how CAC affects burn, and how burn trajectory affects the exit narrative three years out.

This is useful. But look at what it assumes. It assumes that when the causal model surfaces a problem (say, GRR declining because acquisition quality dropped) someone will close the cross-functional decision needed to fix it. Specifically: Sales and Customer Success will sit in a room, agree on new qualification criteria, assign an owner to implement the change, and that agreement will hold long enough to change the acquisition cohort downstream.

That decision is where the model breaks. The causal KPI tree can diagnose the problem beautifully. The simulation can project what happens if churn continues at current rates. But the actual intervention — the cross-functional decision between two teams with competing incentives — happens in a meeting. And nobody is measuring whether that meeting produced a real agreement or a polite approximation of one.

Three patterns that the standard KPI system catches too late

The Vedrai paper describes three recurring problems in scale-ups, and each one is a coordination failure before it shows up in the financial KPIs.

The first: GRR declines from 89% to 83% over three quarters, silently, while the top line grows at 40% YoY. The board is satisfied. The cause was a decision made (or not made) nine months earlier, when Sales lowered qualification thresholds to hit new logo targets. Customer Success inherited the resulting cohort. Nobody connected the two conversations. By the time GRR shows up in the top-line KPIs, the damage has been compounding for three quarters.

The second: the fund invested with a thesis projecting $80 million ARR at exit. The company is at $18 million, growing at 35% annually. The CAGR gets them to $65 million — 19% below target. Management has not built the simulation showing what must change to close the gap. But even if they build the simulation, the resulting decisions (hire five AEs, shift market segment, adjust pricing) each require alignment across multiple functions. Those alignment decisions have to close in rooms where people disagree about priorities, and the closures have to propagate cleanly into execution.
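The gap arithmetic in this pattern is worth making explicit. A minimal sketch using the figures from the text; the exit horizon is not stated, so it is derived here from the $65 million projection, and the "required CAGR" line is an illustrative extension, not a number from the source:

```python
import math

current_arr = 18.0    # $M ARR today (from the text)
growth = 0.35         # 35% annual growth (from the text)
target_arr = 80.0     # $M ARR in the fund's thesis (from the text)
projected_arr = 65.0  # $M where the current CAGR lands (from the text)

# Shortfall vs. the thesis: (80 - 65) / 80
gap_pct = (target_arr - projected_arr) / target_arr
print(f"Shortfall: {gap_pct:.0%}")  # 19%

# Implied horizon: how long 35% growth takes to turn $18M into $65M
years = math.log(projected_arr / current_arr) / math.log(1 + growth)
print(f"Implied horizon: {years:.1f} years")  # ~4.3

# CAGR actually required to hit $80M over that same horizon
required = (target_arr / current_arr) ** (1 / years) - 1
print(f"Required CAGR: {required:.0%}")  # ~42%
```

The point of the arithmetic: closing a 19% gap means lifting the growth rate by roughly seven points and holding it for four-plus years, which is exactly the kind of change that requires the multi-function decisions described above.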

The third: burn multiple deteriorates from 1.8x to 2.7x over six quarters. Finance sees the burn. Commercial sees the sales costs. Operations sees the delivery costs. Nobody has the integrated view. The Vedrai paper correctly says the fix is a single model connecting revenue, costs, and cash. But the decisions that feed that model, the trade-offs negotiated between functions every week, are where the governance actually happens. And those are invisible to every KPI in the standard framework.

In all three cases, the financial KPIs catch the problem after it has compounded. The governance layer, where cross-functional decisions get made, held, or broken, is where the problem forms. And the standard KPI system has no instrumentation for it.

The missing fourth level

If you were to add a governance row to the standard KPI table, it would need to answer four questions. Is the cross-functional meeting producing real coordination, or are people leaving with different interpretations of what happened? Are the decisions closing with enough structural completeness — explicit statement, named owner, surfaced dissent, rationale — to actually hold? Are the delegated actions leaving the room with enough specificity to execute cleanly? And are old items returning because they were parked without resolution and have now compounded into blockers?

Those four questions map to four KPIs:

| Level | Key KPI | What it measures |
| --- | --- | --- |
| Top Line | ARR / NRR / GRR | Revenue growth, retention, expansion |
| Commercial | CAC / LTV:CAC / Payback | Acquisition efficiency, unit economics |
| Operational / Financial | Burn Multiple / ARR per Employee / Rule of 40 | Operating efficiency, scalability |
| Governance | Coordination Quality | Per-meeting composite score across five dimensions: safety, balance, reliability, focus, and process clarity. Arena-relative: what counts as good depends on the meeting type. Measures whether the coordination forum itself is functioning. |
| Governance | Decision Reliability | Whether decisions close with structural completeness: explicit statement, named owner, surfaced dissent, captured rationale. Incomplete closures count at reduced weight because they tend to reopen under pressure. |
| Governance | Delegation Flow | Probability that delegated actions will execute cleanly. Calculated from two inputs: closure signal completeness (owner, next step, deadline) and arena fit (whether the delegation was produced in a meeting context appropriate for clean handoffs). |
| Governance | Coordination Debt | Topics resurfacing from prior meetings without resolution. These are not new agenda items; they are old ones returning because they were parked without a next step, escalated without a carrier, or partially closed without follow-through. |
Why these four and not others

These four KPIs are not satisfaction metrics or engagement scores. They measure structural properties of the decision process — the same properties that determine whether the other three KPI levels will function as an integrated system or fragment back into silos.

Coordination Quality is the top-line health score for the governance layer itself. It combines five dimensions: whether people are being heard (safety), whether the right voices are contributing relative to what the meeting type requires (balance), whether decisions are landing (reliability), whether the group is staying on the work it chose to do (focus), and whether the group knows how it's deciding (process clarity). A meeting can feel productive and still score poorly on structural coordination. The score surfaces the gap between perceived quality and actual coordination outcomes.
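The composite might be sketched as a weighted average over the five dimensions, with weights that depend on the meeting type since the score is arena-relative. Everything below except the five dimension names is an assumption: the 0-to-1 scale, the weights, and the two arena profiles are illustrative only.

```python
# Illustrative sketch of a per-meeting Coordination Quality composite.
# The five dimensions come from the text; the 0..1 scale, the weights,
# and the arena profiles are assumptions for illustration.
ARENA_WEIGHTS = {
    # Arena-relative: a decision forum weights reliability heavily,
    # a status sync weights focus heavily.
    "decision_forum": {"safety": 0.15, "balance": 0.20, "reliability": 0.35,
                       "focus": 0.15, "process_clarity": 0.15},
    "status_sync":    {"safety": 0.15, "balance": 0.15, "reliability": 0.15,
                       "focus": 0.40, "process_clarity": 0.15},
}

def coordination_quality(scores: dict, arena: str) -> float:
    """Weighted composite of the five dimension scores (each 0..1)."""
    weights = ARENA_WEIGHTS[arena]
    return sum(weights[dim] * scores[dim] for dim in weights)

# A meeting that "feels productive" (high safety, decent focus) but
# where decisions are not landing (low reliability):
meeting = {"safety": 0.9, "balance": 0.6, "reliability": 0.4,
           "focus": 0.8, "process_clarity": 0.7}
print(coordination_quality(meeting, "decision_forum"))  # ~0.62
```

The same raw scores grade out differently by arena: this meeting would score higher as a status sync (~0.71), which is the arena-relative point in the table above.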

Decision Reliability measures closure quality at the decision level. A decision that leaves a room without an explicit statement of what was decided, a named owner, and captured rationale is structurally incomplete. It may hold for a week, but when a dependency shifts or a new stakeholder arrives, it will likely need to be reconstructed. In the GRR pattern above, the decision about qualification criteria probably happened — but without enough structural completeness to survive the next quarter's new logo push.
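One way to operationalize "incomplete closures count at reduced weight" is to score each closure by the fraction of the four structural elements it carries. The equal weighting and the averaging are assumptions; only the four elements come from the text.

```python
# Illustrative sketch of Decision Reliability scoring. The four structural
# elements come from the text; equal per-element weights and the simple
# average across decisions are assumptions.
ELEMENTS = ("explicit_statement", "named_owner", "surfaced_dissent", "rationale")

def closure_weight(decision: dict) -> float:
    """A fully complete closure counts 1.0; each missing element reduces
    the weight, reflecting that incomplete closures tend to reopen."""
    present = sum(1 for e in ELEMENTS if decision.get(e))
    return present / len(ELEMENTS)

def decision_reliability(decisions: list) -> float:
    """Average closure weight across a period's decisions."""
    return sum(closure_weight(d) for d in decisions) / len(decisions)

period = [
    {"explicit_statement": True, "named_owner": True,
     "surfaced_dissent": True, "rationale": True},   # fully closed -> 1.0
    {"explicit_statement": True, "named_owner": False,
     "surfaced_dissent": False, "rationale": True},  # partial -> 0.5
]
print(decision_reliability(period))  # 0.75
```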

Delegation Flow measures the handoff. A decision can close cleanly in the room and still fail to execute because the delegation was vague — no specific next step, no named owner on the receiving team, no deadline. The fund target pattern depends on this: even if leadership closes the decision to hire five AEs in the enterprise segment, the delegation needs to propagate with enough specificity that recruiting, finance, and the commercial team all know what they own and by when.
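The text names the two inputs but not how they combine; a multiplicative combination is one plausible choice, since a handoff fails if either the closure signals or the arena is weak. The function below is a sketch under that assumption, with invented example values.

```python
# Illustrative sketch of Delegation Flow: the probability a delegated
# action executes cleanly, from the two inputs named in the text.
# The multiplicative combination and all example values are assumptions.
SIGNALS = ("owner", "next_step", "deadline")

def signal_completeness(delegation: dict) -> float:
    """Fraction of the three closure signals present."""
    return sum(1 for s in SIGNALS if delegation.get(s)) / len(SIGNALS)

def delegation_flow(delegation: dict, arena_fit: float) -> float:
    """Combine signal completeness (0..1) with arena fit (0..1): how
    appropriate the meeting context was for a clean handoff."""
    return signal_completeness(delegation) * arena_fit

# Hypothetical handoff from the AE-hiring example: owner and next step
# named, but no deadline attached in the room.
d = {"owner": "recruiting lead", "next_step": "open 5 AE reqs", "deadline": None}
print(delegation_flow(d, arena_fit=0.9))  # 2/3 * 0.9, ~0.6
```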

Coordination Debt is the compounding signal. It counts the topics that keep coming back because they were never structurally resolved. In a healthy governance process, parked items get routed to the right forum with a next step and a deadline. In an unhealthy one, they get deferred and forgotten until they resurface — usually in a less appropriate context, further from the original problem, and consuming more time than they would have if resolved the first time. When coordination debt rises in the forums where cross-functional decisions happen, the decisions feeding the KPI tree are starting to fail structurally.
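Counting debt reduces to checking each agenda item against meeting history: an item counts only if it appeared before and left that meeting without structural resolution. The record shape and the resolution criterion (routed onward with a next step) are assumptions for illustration.

```python
# Illustrative sketch of Coordination Debt: counting topics that return
# from prior meetings without structural resolution. The record format
# and resolution criterion are assumptions.
def is_resolved(topic_record: dict) -> bool:
    """A parked item counts as resolved only if it was routed to a forum
    with a next step -- not merely deferred."""
    return bool(topic_record.get("next_step") and topic_record.get("routed_to"))

def coordination_debt(agenda: list, history: dict) -> int:
    """Count agenda items that already appeared in a prior meeting
    and left it unresolved."""
    return sum(1 for item in agenda
               if item["topic"] in history
               and not is_resolved(history[item["topic"]]))

history = {"qualification criteria": {"next_step": None, "routed_to": None},
           "pricing review": {"next_step": "draft proposal", "routed_to": "RevOps"}}
agenda = [{"topic": "qualification criteria"},  # parked without a next step: debt
          {"topic": "pricing review"},          # properly routed: not debt
          {"topic": "new territory plan"}]      # genuinely new: not debt
print(coordination_debt(agenda, history))  # 1
```

In this sketch a rising count over consecutive meetings is the compounding signal the text describes: the same unresolved topics consuming agenda time again and again.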

What the governance layer changes about integrated management

The Vedrai paper argues that scale-ups need to break the silos — that integrated end-to-end management means decisions made by one function must be visible and coordinated with upstream and downstream functions. The causal KPI tree and simulation model are the right infrastructure for making the financial and commercial relationships visible. What they cannot do is make the coordination process itself visible.

When a CFO builds the model connecting revenue, costs, and cash into a single simulation, that model tells leadership what to do. The governance layer tells them whether the organization can actually execute it — whether the cross-functional decisions needed to change course are closing with enough structural integrity to hold through the next quarter, or whether they're producing the appearance of alignment that dissolves the first time a priority shifts.

The 100-day plan that PE firms install at acquisition typically establishes governance, installs the KPI system, and launches value creation levers. The governance it installs is usually reporting governance: cadence, format, escalation paths to the fund. Rarely does it instrument the quality of the decisions being made inside those cadences. That gap is how you get a company with excellent reporting discipline that still discovers structural problems too late — because the dashboards were current, but the decisions behind the dashboards were incomplete.

Adding the governance level to the KPI system changes what leadership can see. A declining Coordination Quality score in the weekly cross-functional sync is a leading indicator — it surfaces the coordination failure while there is still time to intervene, not after three quarters of GRR erosion have already compounded into a board-level problem. Rising Coordination Debt in the forums where resource allocation decisions happen tells you that trade-offs are being deferred, not resolved. A low Delegation Flow after the quarterly planning session tells you that the strategic decisions made in that room have a structural risk of stalling before they reach the teams that need to execute them.

The three-level KPI system tells you the business is healthy or sick. The simulation tells you where it's heading. The governance layer tells you whether the decisions being made right now are structurally sound enough to get you there.

Common questions

What is the governance gap in scale-up KPI systems?

Most PE- and VC-backed scale-ups build three-level KPI systems: top line, commercial, and operational/financial. All three depend on cross-functional decisions to function as an integrated system. But no KPI in the standard framework measures whether those decisions are actually closing, holding, and propagating. That is the governance gap — the missing instrumentation for the layer where coordination either works or breaks.

Why can't existing KPI systems catch coordination failures?

Existing KPIs measure the outputs of coordination — revenue, retention, efficiency — not the coordination process itself. A declining GRR shows up months after the cross-functional decision between Sales and CS failed to close. By the time the financial KPIs reflect a coordination failure, the compounding has already happened.

What KPIs measure operational governance?

Four: Coordination Quality (per-meeting composite across five structural dimensions), Decision Reliability (structural completeness of closures), Delegation Flow (probability that delegated actions will execute cleanly), and Coordination Debt (topics resurfacing from prior meetings without resolution).

Sources

Vedrai Observatory, "Growth Is Not Enough. Governing Is Harder Than Scaling," Vedrai S.p.A. Research & Intelligence, February 2026.

Sam Kaner, Facilitator's Guide to Participatory Decision-Making, Jossey-Bass, 2007.

Dave Snowden and Mary Boone, "A Leader's Framework for Decision Making," Harvard Business Review, November 2007.
