We study how teams make decisions together. These articles share what we're learning about meeting dynamics, facilitation patterns, and the science of organizational coordination.
The same coordination failures surfaced at a 6,000-person retailer, an IPO-track marketplace, and a 200-person SaaS company. The common denominator was the product development model.
Four closure signals, five decision types, and six decision rules give the DRI structural visibility into whether coordination is working or silently falling apart.
Keith Sawyer identified ten conditions for group flow. Standard corporate meeting design systematically dismantles every one of them.
DORA measures the delivery pipeline. The coordination layer upstream determines whether those numbers will be healthy or degraded, and DORA doesn't measure it.
Google Trends data for "Directly Responsible Individual" was flat at zero for a decade. Then three structural forces hit simultaneously. The curve is a proxy for organizational pain.
Scale-ups build three-level KPI systems. None of those systems track whether the cross-functional decisions connecting all three are actually working.
Like its military namesake, the meetings-industrial complex is a self-sustaining system held in place by the people who benefit from it. Calendar purges don't fix it. Coordination observability does.
You instrument your production systems. The human coordination layer — where decisions get made — runs completely dark. Growth Wise is the observability layer for that.
The Directly Responsible Individual model solves accountability diffusion. It doesn't address whether the group actually converged.
Senior engineers assigned as DRI on cross-functional features keep hitting the same wall: the role is really a mini-TPM, and most organizations never say that out loud.
The AI CEO is inevitable. But fed with activity data, it will produce perfect dashboards while coordination quality decays.
AI is collapsing execution costs. The result is not less work — it is more coordination, more decisions, and more ways for ambiguity to scale.
The bot proxy trend is real. But the question isn't whether to send the bot. It's which meetings are structurally safe for delegation.
The coordination tax in matrix organizations is structural, not cultural. 84% of the workforce operates in a matrix. Most can't see why decisions are slow.
The meeting analytics landscape has four distinct layers: transcription, action, metadata, and decision reliability infrastructure. Most organizations stop at two.
A practical evaluation framework: five criteria that separate tools addressing surface problems from those addressing structural ones.
Research from the Collective Intelligence Labs at Stockholm School of Economics shows that without structured reflection, team coordination erodes. The teams most at risk can't see it happening.
Meeting drift is when discussions gradually stray from stated agendas. Research shows it's one of the strongest predictors of poor decision outcomes.
Why some decisions stick while others reopen. A research-backed look at what makes team decisions durable.
When key perspectives are missing from meetings, decisions suffer. How to identify and address role gaps.
Subscribe for new articles on coordination dynamics, decision reliability, and the science of how organizations actually work.