GROWTH|WISE
FAQ · How Growth Wise Works

What does Growth Wise actually measure?

Structural absence — not activity, not sentiment. Five concrete layers, every diagnosis anchored to a direct quote.

Direct Answer

Growth Wise measures structural absence — missing closure mechanics, fake agreement, and role gaps — across five concrete layers in a meeting transcript. It does not measure personality, emotion, or hidden intent. Every diagnosis is anchored to Level 1 evidence: verbatim quotes and timestamps. The question the system answers is not "how did the meeting feel?" but "did the meeting produce the structural mechanics that will allow its decisions to hold?" See how this connects to the broader Decision Reliability Infrastructure category.

The five measurement layers

Growth Wise treats meetings as systems and extracts data across five layers, each addressing a different dimension of coordination quality.

Layer 1: Objective participation data

The first layer is a structural scan — raw conversational data with no psychological interpretation attached. It measures how many substantive turns each person took and their share of total words spoken; the percentage of words spoken by the formal leader and their ratio of questions to statements; the exact count of explicit questions or invitations that received no response before the speaker moved on or changed the topic; and the count of uninterrupted speaking blocks exceeding roughly forty substantive words before another participant took a turn.
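The participation scan described above can be sketched in a few lines. This is an illustrative sketch, not Growth Wise's actual implementation: the transcript representation (a list of speaker/text turns), the unanswered-invitation heuristic, and all field names are assumptions.

```python
from collections import Counter

def participation_metrics(turns, leader, monologue_words=40):
    """Layer-1 style structural scan over a list of (speaker, text) turns.

    Returns descriptive counts only; no interpretation is attached.
    Assumes a non-empty transcript.
    """
    words = Counter()
    turn_counts = Counter()
    questions = statements = 0
    unanswered = 0
    monologues = 0
    total_words = 0

    for i, (speaker, text) in enumerate(turns):
        n = len(text.split())
        words[speaker] += n
        turn_counts[speaker] += 1
        total_words += n

        if n > monologue_words:
            monologues += 1  # uninterrupted block over the threshold

        if speaker == leader:
            if text.rstrip().endswith("?"):
                questions += 1
            else:
                statements += 1

        # Crude heuristic: a question followed immediately by another turn
        # from the same speaker counts as an unanswered invitation.
        if (text.rstrip().endswith("?")
                and i + 1 < len(turns)
                and turns[i + 1][0] == speaker):
            unanswered += 1

    return {
        "share_of_voice": {s: words[s] / total_words for s in words},
        "turns": dict(turn_counts),
        "leader_talk_share": words[leader] / total_words,
        "leader_question_ratio": questions / max(statements, 1),
        "unanswered_invitations": unanswered,
        "monologue_blocks": monologues,
    }
```

A real system would need speaker diarization and a notion of "substantive" turns; the point here is only that every Layer 1 number is a count over the raw transcript, with no psychology involved.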

These numbers are descriptive, not evaluative. A high leader talk share is not inherently bad — a status update run by a senior leader expects it. The same number in a decision forum signals something different. The interpretation only comes when this layer is combined with the next one.

Layer 2: The coordination contract (Arena)

The second layer identifies what mode of collective work the group is operating under, based on coordination signals in the transcript. A sync_meeting and a decision_forum require different behaviors, different closure types, and different structural expectations. Growth Wise detects which arena is enacted, then uses it to calibrate what it expects to see in the layers that follow.

This is also where Meeting Drift is measured: whether the group shifted from one mode of work to another without naming the transition. A planning session that slides into a brainstorm without any explicit reorientation has drifted out of its arena — and the structural expectations for closure shift with it, usually without anyone in the room noticing.
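One way to picture the arena calibration above is as an expectations table that the later layers consult. The arena names follow this article (sync_meeting, decision_forum, status_update); the table's contents are illustrative assumptions, not the product's actual specification.

```python
# Hypothetical calibration table: which closure signals each arena is
# expected to produce. Arena and signal names follow the article; the
# mappings themselves are illustrative assumptions.
ARENA_EXPECTATIONS = {
    "status_update":  set(),          # pure information transfer, no closure required
    "brainstorm":     set(),          # exploration work, no closure required
    "sync_meeting":   {"next_step"},
    "decision_forum": {"decision", "owner", "next_step", "time"},
}

def expected_signals(arena):
    """Return the closure signals the enacted arena is expected to produce."""
    return ARENA_EXPECTATIONS.get(arena, set())
```

Meeting Drift, in this picture, is simply the enacted arena changing mid-transcript while the expectations used to judge closure silently change with it.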

Layer 3: Conversational work (Topics)

The third layer tracks the specific type of work the conversation is doing at any given moment. Is the group doing exploration work — generating ideas, surfacing perspectives, expanding the possibility space? Or decision work — narrowing toward a choice where the group has the internal authority to decide the outcome? The distinction matters because the structural expectations are different. Exploration doesn't require closure. Decision work does. A meeting where the group discusses a decision but the conversational work stays in exploration mode throughout has a structural mismatch — the arena expected a decision, but the work never reached one.

Layer 4: Structural closure outcomes

The fourth layer is where Growth Wise measures Closure Quality directly. It scans for four explicit verbal signals: decision (an explicit statement of what was agreed), owner (a named individual responsible for execution), next_step (a defined action), and time (a deadline or trigger). Each closure is measured as achieved, partial, or absent.

If the enacted arena expects an action_taken closure, and the team agrees to fix a bug but no one names an owner out loud, the closure is measured as partial. The agreement existed. The structural mechanics that allow it to hold did not, and that absence is what generates Rework Risk.
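The achieved/partial/absent classification reduces to a set comparison between what the arena expects and what the transcript actually contains. A minimal sketch, assuming signals are detected elsewhere and represented as string sets (the function name and return labels are illustrative):

```python
def closure_quality(expected, detected):
    """Classify a closure as achieved / partial / absent.

    expected: signals the enacted arena requires, e.g. {"decision", "owner"}
    detected: signals explicitly voiced in the transcript
    """
    if not expected:
        return "not_required"   # e.g. exploration work carries no closure expectation
    found = expected & detected
    if found == expected:
        return "achieved"       # every required signal was stated out loud
    if found:
        return "partial"        # agreement existed, some mechanics were missing
    return "absent"
```

In the bug-fix example above, a decision_forum expecting all four signals but hearing only a decision and a next step would classify as partial: the agreement was real, the owner was never named.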

Layer 5: Behavioral alignment

The fifth layer extracts three to seven verbatim quotes from the transcript and measures whether those specific behaviors were healthy, misaligned, or neutral — strictly against the rules of the enacted arena. This is not a universal judgment about whether a behavior is "good." It is an arena-relative judgment. Providing pure information is healthy in a status_update. Pitching a new solution in that same meeting is misaligned: it violates the cognitive permissions of the arena and imposes a mode of work the group didn't contract for.
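Because the judgment is arena-relative, it can be pictured as a lookup keyed on the pair (arena, behavior) rather than on the behavior alone. The two examples in the paragraph above are encoded below; everything else in the table is an illustrative assumption, not Growth Wise's actual ruleset.

```python
# Illustrative arena-relative rule table. The same behavior maps to
# different judgments depending on the enacted arena.
ALIGNMENT_RULES = {
    ("status_update", "provide_information"):  "healthy",
    ("status_update", "pitch_solution"):       "misaligned",
    ("decision_forum", "pitch_solution"):      "healthy",
    ("decision_forum", "provide_information"): "neutral",
}

def judge(arena, behavior):
    """Judge a quoted behavior strictly against the rules of the enacted arena."""
    return ALIGNMENT_RULES.get((arena, behavior), "neutral")
```

The key property is that no row says a behavior is universally good or bad; the judgment only exists relative to the coordination contract the group is operating under.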

The Evidence Ladder

Every diagnosis Growth Wise produces is anchored to what the system calls the Evidence Ladder. Level 1 evidence — the highest tier, and the only one used for primary diagnoses — is a verbatim quote with a timestamp. When Growth Wise identifies a partial closure, it points to the exact moment in the transcript where the discussion happened and the required signal failed to appear. When it identifies a missing owner, it quotes the passage where the decision was reached and no name was attached to execution.

This matters because the most common objection to AI-based meeting analysis is that it infers things that aren't really there. Growth Wise does not infer psychological states. It doesn't detect frustration, disengagement, or hidden disagreement. It measures explicit mechanics: was a decision stated out loud? Was an owner named? Was dissent voiced? If the signal wasn't in the transcript, the system doesn't claim it was there.
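The "no quote, no claim" rule can be enforced at the data-model level: a diagnosis simply cannot be constructed without its Level 1 evidence attached. A hypothetical sketch, with all type and field names assumed:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level1Evidence:
    quote: str       # verbatim text from the transcript
    timestamp: str   # position in the recording, e.g. "00:14:32"

@dataclass(frozen=True)
class Diagnosis:
    finding: str              # e.g. "partial closure: owner never named"
    evidence: Level1Evidence  # required: primary diagnoses carry Level 1 evidence
```

Because `evidence` has no default, constructing a `Diagnosis` without a quote and timestamp raises an error, which is exactly the guarantee the Evidence Ladder describes.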

What it measures vs. what it doesn't

The distinction that matters most for buyers comparing tools is structural absence versus activity. Meeting recorders like Otter AI and Fireflies measure activity — what was said, who spoke, what topics came up. That layer is useful for reference and recall. It tells you what happened. Growth Wise measures what was missing: the explicit statement that never came, the owner that was never named, the dissent that was acknowledged but never resolved before the group moved on.

Project management tools like Asana measure hygiene — are tasks checked off? Are deadlines met? That layer is useful for tracking execution. What it can't see is the reliability of the agreement that generated the task in the first place. A task can be correctly recorded in Asana while the decision behind it was made by fake agreement. Growth Wise measures the decision, not the downstream artifact. The question shifts from "was the task logged?" to "was the agreement that created the task structurally real?"

Why each buyer cares about a different layer

A COO cares primarily about the closure layer and its organizational aggregation: how much unmeasured decision churn is causing escalation gridlock — mid-level conflicts that bypass the coordination layer entirely and land on executive calendars because no one made a binding call in the room. Every meeting that ends without an explicit decision produces candidates for that escalation queue.

A Chief of Staff or Operations leader cares primarily about the evidence layer. Growth Wise gives them a forensic record of where fake agreement happened — a specific quote, a specific timestamp, a specific meeting — that they can use to enforce structural closure without relying on their own authority or spending political capital as the bad cop. The accountability is in the transcript, not in their judgment call.

An external coach or facilitation consultant cares about the behavioral alignment layer and the longitudinal baseline. Most coaching engagements produce no measurable evidence of what changed in the room after the workshop. Growth Wise gives them team-level data showing whether the target behaviors — named owners, explicit decision statements, surfaced dissent — actually appeared in subsequent meetings, or whether the workshop produced awareness without changing the structural mechanics.

Common objections

The most frequent concern is whether an AI system can really measure human agreement. Growth Wise doesn't try to. It doesn't measure whether people genuinely agree or secretly object. It measures whether the explicit mechanics that allow agreement to function — an owner, a deadline, a stated decision — were present in the conversation. If they were, the agreement is structurally measurable. If they weren't, the system reports the absence. The question of whether internal agreement followed is outside scope by design.

The second concern is surveillance: the fear that the system is tracking individual performance. The governance boundary is strict. Growth Wise produces team-level pattern data, not individual scorecards. "The challenge role was missing from this meeting" is a team-level structural diagnosis. "Steve didn't contribute enough" is an individual evaluation that Growth Wise does not produce and is not designed to support.

The third is framework fatigue: organizations that have already spent time rolling out DACI, RACI, or RAPID and don't want another process layer. Growth Wise doesn't add a framework — it measures whether the existing one is being enacted in reality. Most organizations have frameworks on paper; almost none can verify whether those frameworks are producing explicit decisions in actual meetings. The measurement question is precise: did the 'D' in DACI actually make a binding choice on Tuesday morning, or did the meeting end with distributed ambiguity that everyone silently agreed to call a decision?

"We do not measure internal feelings or psychological safety. We measure explicit mechanics: was an owner named? Was a deadline set? Was dissent voiced? We interpret the system's structure, not the psyche."

Common questions

What does Growth Wise actually measure?

Growth Wise measures structural absence — specifically missing closure mechanics, fake agreement, and role gaps — in five concrete layers: objective participation data (turns, share of voice, unanswered invitations, monologues), the coordination contract the group is operating under (what arena they're in and what it requires), the type of work being done at any moment (exploration versus decision), structural closure outcomes (whether a decision produced an owner, a next step, and a time), and behavioral alignment against the enacted arena's rules. It does not measure personality, emotion, or hidden intent. Every diagnosis is anchored to Level 1 evidence: direct quotes and timestamps from the transcript.

How is Growth Wise different from meeting recorders like Otter AI?

Meeting recorders measure activity — what was said. Growth Wise measures structural integrity — what was missing. A transcript tells you that a decision was discussed. Growth Wise tells you whether that discussion produced an explicit statement, a named owner, a next step, and a captured rationale — or whether it produced the appearance of a decision without the mechanics that allow it to hold. The difference is the same as between a stenographer and a building inspector: one records what happened, the other tells you whether what was built will stand.

Does Growth Wise measure individual performance or team patterns?

Team patterns only. Growth Wise measures structural gaps at the group level — for example, "the challenge role was absent from this meeting" — not individual scorecards. It does not produce ratings for specific participants or flag individuals for underperforming. The governance boundary is strict: it is a mirror for the team, not a surveillance tool for management. This is a design decision, not a technical limitation. The coordination failures that cause execution problems are almost always systemic, not individual.

What is the Evidence Ladder and why does it matter?

The Evidence Ladder is the system Growth Wise uses to anchor every diagnosis to direct evidence rather than inferred sentiment. Level 1 evidence — the highest tier — is a verbatim quote with a timestamp. When Growth Wise identifies a partial closure or a missing owner, it points to the exact moment in the transcript where the decision was discussed and the required signal failed to appear. This matters because Growth Wise does not infer psychological states. It measures explicit verbal mechanics: was an owner named out loud? Was a deadline stated? Was dissent voiced? If the signal wasn't present in the transcript, it wasn't present in the meeting.

Can Growth Wise measure whether our decision-making framework is actually being used?

Yes, and this is one of its most specific uses. Most organizations have decision frameworks on paper — DACI, RACI, RAPID — but almost none can verify whether those frameworks are enacted in actual meetings. Growth Wise measures the gap between the claimed process and the observable reality. If your DACI framework specifies that the Decider makes a binding choice, Growth Wise can identify whether a binding choice was explicitly stated in the meeting, or whether the group discussed options and ended without one. The framework compliance question shifts from "do our people know about DACI?" to "did DACI produce an explicit decision on Tuesday morning?"

What does Growth Wise not measure?

Growth Wise does not measure personality, psychological safety, emotional tone, hidden intent, or individual performance ratings. It does not infer what participants were thinking or feeling. It does not produce engagement scores, sentiment analysis, or culture assessments. The system treats meetings as structural systems and measures the presence or absence of specific coordination mechanics. Anything that cannot be identified from an explicit verbal signal in the transcript is outside scope.
