The per-meeting composite score that measures how well a group turned discussion into durable, aligned outcomes — relative to the type of work they convened to do.
Direct Answer
Coordination Quality is Growth Wise's top-line per-meeting score. It measures how well a meeting's coordination worked across five dimensions: Safety & Clarity (are people being heard?), Balance (is the right person talking?), Reliability (are decisions actually landing?), Focus (is the group doing the work it chose to do?), and Process Clarity (does the group know how it's deciding?). What counts as "good" is arena-relative — a Status Update has different coordination expectations than an Ideation session. The score reflects structural quality, not effort or engagement.
Every meeting convenes with a purpose — Planning, Problem-Solving, Status Update, Ideation, and others. Each arena sets different expectations for what "good" coordination looks like. A Status Update expects the leader to carry the conversation; that same behaviour in an Ideation session suppresses the divergent thinking the arena requires. A Planning meeting should close scoped, assigned decisions; producing open possibilities would be misaligned.
This arena-relative design draws directly from the Cynefin framework — developed by David Snowden in his 1999 work Liberating Knowledge — and its concept of bounded applicability, the idea that no management method is context-agnostic. Cynefin uses conversational dominance as a diagnostic signal for exactly this reason: a single voice dominating a Complicated problem-solving session prevents the expert analysis the situation demands, while appropriate leader authority in a Clear administrative update is structurally correct. Coordination Quality is calibrated against what the arena requires, so the same behaviour can be healthy in one context and a failure signal in another.
Coordination Quality is a composite of five structural signals, each measuring a different failure mode:
Safety & Clarity
Counts unanswered invitations — moments where someone raised a question, surfaced a concern, or made a bid for attention and received no response. Amy Edmondson's 1999 study of psychological safety in work teams found this pattern consistently: silence following a contribution functions as an ostracism signal, rapidly eroding the contributor's self-efficacy and willingness to take interpersonal risk. When bids go unanswered, people stop contributing — and the group loses access to what they were carrying. This dimension is not arena-relative; it applies everywhere.
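A minimal sketch of how unanswered invitations might be counted from a turn-ordered transcript. The event shape here ('is_bid' marking a bid, 'answers_bid' pointing back at the bid it addresses) is a hypothetical model, not Growth Wise's actual data format:

```python
def unanswered_bids(turns):
    """Count bids for attention that no later turn ever addressed.

    turns: list of dicts. A bid turn has 'is_bid': True; a turn that
    responds to an earlier bid carries 'answers_bid': <index of that bid>.
    (Hypothetical schema, assumed for illustration.)
    """
    # Collect the indices of every bid that received some response.
    answered = {t["answers_bid"] for t in turns if t.get("answers_bid") is not None}
    # A bid with no entry in `answered` is an unanswered invitation.
    return sum(1 for i, t in enumerate(turns) if t.get("is_bid") and i not in answered)
```

On a transcript with two bids where only the first is acknowledged, the function returns 1, flagging the second as the unanswered invitation this dimension counts.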
Balance
Measures whether the right people are talking relative to what the arena requires. Google's Project Aristotle identified equality in conversational turn-taking as one of two primary norms in high-performing teams — not as a fixed rule but as a standard for participation that shifts with context. A leader dominating an Ideation session crowds out the contributions the arena exists to generate. A leader who is too quiet in a Status Update creates ambiguity about direction. Balance scoring is calibrated against what the arena makes appropriate.
Reliability
Measures closure outcomes — whether the decisions, actions, and escalations the group worked through actually landed. Full closures count at full value. Partial closures — where something was agreed but an owner, deadline, or next step is missing — count at 0.3x. Sam Kaner, lead author of Facilitator's Guide to Participatory Decision-Making (Jossey-Bass, 2007), describes these as "pseudo-solutions": they create an illusion of closure that consistently fails during implementation. The reduced weighting reflects the known coordination debt partial closures carry into future meetings.
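The weighted closure rate can be sketched as below. The 0.3x partial weight is stated in the text; the item fields ('owner', 'deadline', 'next_step') and the choice to score a meeting with nothing to close as 1.0 are illustrative assumptions:

```python
FULL_WEIGHT = 1.0
PARTIAL_WEIGHT = 0.3  # coordination-debt weighting from the text

def reliability_score(items):
    """Weighted closure rate over a meeting's decisions, actions, and escalations.

    items: list of dicts with 'owner', 'deadline', 'next_step'
    (hypothetical fields; any missing one makes the closure partial).
    """
    if not items:
        return FULL_WEIGHT  # assumption: nothing to close counts as fully reliable
    total = sum(
        FULL_WEIGHT if all(item.get(k) for k in ("owner", "deadline", "next_step"))
        else PARTIAL_WEIGHT
        for item in items
    )
    return total / len(items)
```

One full closure plus one closure missing its deadline scores (1.0 + 0.3) / 2 = 0.65, making the carried coordination debt visible in the number.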
Focus
Measures unbounded drift — time outside the forum's intended scope that no one caught or redirected. Cynefin distinguishes between uncontrolled disorder and what it calls "messy coherence": generative tangents that are bounded and parked. Bounded drift is not penalised. Kaner makes the same distinction, arguing that tangents taken seriously and explicitly parked become structured divergent thinking rather than coordination loss. Unbounded drift is the version where no one intervened — and the work the group convened to do was displaced.
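The bounded/unbounded distinction might be scored as follows; the segment fields and the linear time penalty are assumptions made for illustration:

```python
def focus_score(segments):
    """1.0 minus the fraction of meeting time lost to unbounded drift.

    segments: list of dicts with 'minutes', 'on_scope', 'redirected'
    (hypothetical fields). Off-scope time that someone caught and
    parked ('redirected': True) is bounded drift and not penalised.
    """
    total = sum(s["minutes"] for s in segments)
    if total == 0:
        return 1.0
    # Only drift that no one intervened on counts against the score.
    unbounded = sum(
        s["minutes"] for s in segments
        if not s["on_scope"] and not s["redirected"]
    )
    return 1.0 - unbounded / total
```

A 60-minute meeting with a 10-minute tangent that was parked and a 10-minute tangent no one caught is penalised only for the latter.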
Process Clarity
Examines whether decisions were made with an explicit rule — consensus, consultative, delegated, or democratic. Kaner's facilitation framework holds that a group must establish its decision rule before reaching a decision point. Without one, the room leaves unsure whether they agreed, were consulted, or were simply informed. That ambiguity is a direct cause of decision churn. This dimension only applies to items that reached a decision point during the meeting.
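The text does not specify how the five dimensions roll up into the composite, so this sketch assumes each dimension is scored 0 to 1 and the applicable ones are averaged equally, with Process Clarity excluded when no item reached a decision point:

```python
def coordination_quality(scores):
    """Average the applicable dimension scores (each 0..1).

    scores: dict mapping dimension name to a score, or None when the
    dimension does not apply to this meeting. Equal weighting is an
    assumption; the actual composite may weight dimensions differently.
    """
    applicable = [v for v in scores.values() if v is not None]
    return sum(applicable) / len(applicable)

example = {
    "safety_clarity": 0.9,
    "balance": 0.8,
    "reliability": 0.65,
    "focus": 0.85,
    "process_clarity": None,  # no item reached a decision point this meeting
}
```

Here the composite averages the four applicable dimensions to 0.8; the dimension-level breakdown (reliability at 0.65) is what intervention acts on.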
The composite score has two known limitations worth understanding before acting on it.
First, silence is not a universal failure signal. In high-power-distance or collectivist cultures, silence following a contribution can be a normative expression of respect rather than an unanswered bid. The Safety & Clarity dimension measures structural absence — whether a bid was acknowledged — but interpretation requires cultural context that a score alone cannot carry.
Second, the aggregate score can obscure specific vulnerabilities. A team scoring well overall on focus and balance might consistently score poorly on reliability — and still fail to execute. The composite number is a useful starting point. The dimension-level breakdown is where intervention becomes actionable. If Reliability is low, the intervention is structural: five minutes at the end of each meeting to force owner and deadline assignment. If Safety & Clarity is low, the intervention is facilitation: closing conversational loops in the moment by naming and addressing unanswered bids directly.
A high Coordination Quality score means the meeting's structural conditions were sound: people were heard, the right people were talking, decisions landed with owners and next steps, the group stayed on the work it came to do, and everyone left knowing how decisions were made. It does not mean the decisions were correct. Content quality is separate from coordination quality. A group can coordinate well and still reach a wrong conclusion. But high coordination quality means the decision had the structural conditions to be a real one — surfaced dissent, captured rationale, assigned ownership — rather than an illusion that would unravel later.
Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383.
Kaner, S., Lind, L., Toldi, C., Fisk, S., & Berger, D. (2007). Facilitator's guide to participatory decision-making (2nd ed.). Jossey-Bass.
Snowden, D. (1999). Liberating knowledge. In Liberating knowledge (CBI Business Guide). Caspian Publishing.
De Smet, A., D'Auria, G., Meijknecht, L., & Albaharna, M. (2024). Go, teams: When teams get healthier, the whole organization benefits. McKinsey & Company. [Google Project Aristotle findings referenced via McKinsey TEI synthesis.]
"Use this metric as a diagnostic mirror, not an evaluative hammer. The score tells you which dimension failed. The dimension tells you what to fix."
The five dimensions are: Safety & Clarity (unanswered invitations — moments where bids for attention received no response), Balance (leader talk time relative to arena expectations, and whether all participants contributed), Reliability (weighted closure rate — full closures at full value, partial closures at reduced value reflecting coordination debt), Focus (unbounded drift — time outside intended scope that no one redirected), and Process Clarity (whether decisions were made with an explicit decision rule: consensus, consultative, delegated, or democratic).
The score is arena-relative because different arenas have different coordination requirements. A Status Update expects the leader to drive the conversation; an Ideation session expects them to hold back and make space. A Planning meeting should produce scoped, assigned decisions; an Ideation session should produce possibilities. Scoring against a single universal standard would penalise behaviours that are correct for their context. Coordination Quality is calibrated against the arena the group chose to operate in.
Coordination Quality is not quite the same as meeting quality. Meeting quality is often used to mean whether people felt the meeting was a good use of time — a subjective, post-hoc judgment. Coordination Quality is a structural measurement: did the coordination layer function correctly during this meeting? A meeting can feel productive but have low coordination quality if decisions were partial, drift was unbounded, or process was unclear. A meeting can feel uncomfortable but have high coordination quality if difficult issues were surfaced, dissent was captured, and closures were complete.
Two limitations are worth knowing. First, silence is not a universal failure signal — in high-power-distance or collectivist cultures, silence can be normative rather than an unanswered bid, so the Safety & Clarity dimension requires cultural context to interpret accurately. Second, the composite score can mask specific vulnerabilities. A team scoring well on focus and balance but consistently low on reliability will still fail to execute. The aggregate number tells you something is off; the dimension breakdown tells you what to actually fix.