
Three Aggregate Indicators: Measuring Context Supply Capability Beyond Individual KPIs

Overview

Trying to manage a qualitative initiative like context supply capability with individual KPIs invites Goodhart-style runaway and hollows the initiative out. Counting “ADR documents written” or “wiki page totals” cannot measure context supply capability. As Strathern’s modern formulation puts it [1]: “When a measure becomes a target, it ceases to be a good measure.”

This article unpacks the design philosophy and operation of the three aggregate indicators introduced in the sister piece “Implementation Guide for Organizational Context Supply Capability” — Context Maturity Index, Compensation Dependency, and Welcome Rate for Negative Voices — and the deliberate decision to accept subjective evaluation.

The trap of indicator design for qualitative initiatives

Goodhart’s law, restated

Charles Goodhart’s original 1975 proposition (in a monetary-policy context) was: “the moment an economic measure is adopted for regulatory purposes, the relationship between the measure and the underlying reality breaks down.” Strathern restated it in 1997 in the context of education and social policy as the modern aphorism: “When a measure becomes a target, it ceases to be a good measure” [1].

Applied to organizational context supply capability:

  • “Number of ADRs” becomes a KPI → quantity rises while substance hollows out
  • “Wiki page count” becomes a KPI → pages grow alongside dead-doc inventory
  • “Number of postmortems” becomes a KPI → “we’ll be more careful next time” boilerplate piles up

Why quantitative KPIs fit qualitative initiatives badly

Qualitative initiatives inherently involve subjective judgment. “Is this document useful?” or “Is this culture healthy?” depends on the observer, the context, and the moment. The instant you convert these into something objective and quantitative, the substance leaks out.

That said, “you can’t manage what you can’t measure” is wrong here. Even subjective evaluation, taken by the same observer over time, reveals direction of change. The three indicators below are built on that premise.

Aggregate indicator 1: Context Maturity Index

For each level (-1, 0, 1, 2 — the four-layer model from the parent article), evaluate on a 5-point scale:

1. Absent
2. Held only by some individuals (siloed in people)
3. Documented but stale / not updated
4. Documented, current, and actually referenced
5. The document is reused for AI, onboarding, and handoff at exit

Operation

Once a quarter, every manager self-evaluates their own division. Leadership looks at the distribution.

  • Build a heatmap of (level × division)
  • Leadership asks “which level is the cross-org bottleneck?”
  • Each division aims to “lift one level by next quarter”
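The operation above can be sketched as a small script: build the (level × division) heatmap from self-evaluation scores and find the cross-org bottleneck level. The division names and scores below are made-up illustrations, not data from the article.

```python
from statistics import mean

# Hypothetical quarterly self-evaluations: division -> {level: score (1-5)}.
# Levels follow the four-layer model (-1, 0, 1, 2); divisions are illustrative.
evaluations = {
    "platform": {-1: 4, 0: 3, 1: 2, 2: 1},
    "payments": {-1: 3, 0: 4, 1: 3, 2: 2},
    "mobile":   {-1: 2, 0: 2, 1: 2, 2: 1},
}

levels = [-1, 0, 1, 2]

# Cross-org average per level; the lowest average is the bottleneck
# leadership asks about.
level_avg = {lvl: mean(div[lvl] for div in evaluations.values()) for lvl in levels}
bottleneck = min(level_avg, key=level_avg.get)

# Print the heatmap row by row (one row per level, one column per division).
for lvl in levels:
    row = "  ".join(f"{evaluations[d][lvl]}" for d in evaluations)
    print(f"level {lvl:>2}: {row}  avg={level_avg[lvl]:.2f}")
print(f"cross-org bottleneck: level {bottleneck}")
```

The point of the sketch is the read-out, not the math: leadership looks at the row with the lowest average, each division looks at its own column.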

Why self-evaluation is fine

  • The manager themselves knows the current state best
  • Trying to objectivize this turns into external audit, which is expensive and triggers Goodhart-style runaway
  • “What if managers cheat on self-evaluation?” is a separate organizational problem (trust debt) and should be diagnosed separately

Aggregate indicator 2: Compensation Dependency

Once a quarter, subjectively measure: “What percentage of organizational knowledge is concentrated in three or fewer specific individuals?”

Tracking the decline from the Panopto 42% baseline

The 2018 Panopto + YouGov survey [2]: at U.S. enterprises (200+ employees), 42% of organizational knowledge is individual-specific and not shared. That’s a population average; whether your organization sits above or below it is org-specific. Once a quarter, managers subjectively rate “what fraction of our work would become impossible if any three or fewer people left?” Track the trend downward from the 42% baseline.

Relationship to compensation-reduction work

Compensation Dependency falling = STEP 6 (moving the compensating individual’s tacit knowledge into the organization) is working. If the Maturity Index rises but Dependency doesn’t fall, it means documentation is increasing but isn’t reducing person-dependence. This visibility is the value of qualitative indicators — quantitative KPIs cannot show it.
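Since direction of change matters more than absolute accuracy, the tracking itself is trivial: a quarterly time series compared against the previous quarter and the 42% baseline. The quarterly ratings below are made-up illustrations.

```python
# Population average from the 2018 Panopto + YouGov survey.
PANOPTO_BASELINE = 0.42

# Hypothetical quarterly self-ratings of Compensation Dependency:
# "what fraction of our work would become impossible if any three
# or fewer specific people left?"
quarters = ["2024Q1", "2024Q2", "2024Q3", "2024Q4"]
dependency = [0.48, 0.45, 0.41, 0.38]

# Compare each quarter to the previous one (trend) and to the baseline.
for q, prev, curr in zip(quarters[1:], dependency, dependency[1:]):
    trend = "falling" if curr < prev else "flat/rising"
    vs_baseline = "below" if curr < PANOPTO_BASELINE else "above"
    print(f"{q}: {curr:.0%} ({trend}, {vs_baseline} the 42% baseline)")
```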

Aggregate indicator 3: Welcome Rate for Negative Voices

Run an anonymous quarterly survey with two questions:

  1. “In the past three months, did you raise a negative observation you noticed to the organization?” (raise rate)
  2. “How was it handled?” (welcome rate: welcomed / acted on / ignored / penalized)

Track raise rate × welcome rate.

Why the product

  • High raise rate alone → people are speaking up but no response is coming. This is organizational silence [3] about to collapse
  • High welcome rate alone → few people are raising things, and there’s a sample bias (only the brave report)
  • Product → joint health of “I can say it” and “it gets heard”
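Computing the product from the survey responses can be sketched as follows. The response list is made up, and counting both “welcomed” and “acted on” as positive handling is an assumption for illustration.

```python
from collections import Counter

# Hypothetical anonymous survey responses. None means "did not raise
# anything"; otherwise the answer to "how was it handled?".
responses = [
    "welcomed", "acted_on", None, "ignored", "welcomed",
    None, None, "acted_on", "penalized", "welcomed",
]

raised = [r for r in responses if r is not None]
raise_rate = len(raised) / len(responses)

# Assumption: "welcomed" and "acted_on" both count as positive handling.
counts = Counter(raised)
welcome_rate = (counts["welcomed"] + counts["acted_on"]) / len(raised)

# The product captures joint health: "I can say it" AND "it gets heard".
health = raise_rate * welcome_rate
print(f"raise rate:   {raise_rate:.0%}")
print(f"welcome rate: {welcome_rate:.0%}")
print(f"product:      {health:.0%}")
```

Tracked quarterly, either factor stalling drags the product down, which is exactly the failure mode each single rate hides on its own.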

Connection to the trust-debt diagnosis

An organization where the product of raise rate and welcome rate stalls at a low level is highly likely to be the “heavy trust debt” organization in the sister piece [4]. It’s a signal that progress from STEP 1 isn’t sticking and the four phases of trust repair are needed first.

Why subjective evaluation is acceptable

All three indicators include subjective evaluation. This is not a weakness — it’s a design choice:

  • Time-series premise: the design assumes you track quarterly trends in the same organization, so direction of change matters more than absolute accuracy
  • Not for inter-org comparison: you cannot use these to benchmark against other companies. That, too, is intentional (avoiding comparison culture)
  • Don’t rush to objectivize: trying to objectivize them invites number-fiddling (Goodhart)

A quarterly meeting that interrogates the KPIs themselves

This is an operating principle that sits above the aggregate indicators. Once a quarter, hold a meeting whose explicit job is to question the three indicators:

  • “Are we actually changing as a result of tracking these numbers?”
  • “Numbers are down but lived experience says we’re getting better — or vice versa. Which is right?”
  • “Has the indicator stopped functioning as an indicator?”

A culture that can interrogate its KPIs is the last barrier between you and KPI runaway.

Summary

  • Quantitative KPIs for qualitative initiatives produce Goodhart runaway
  • Why subjective evaluation is OK: time-series tracking is the premise; not designed for cross-org comparison
  • Aggregate indicator 1: Context Maturity Index (5-point self-evaluation)
  • Aggregate indicator 2: Compensation Dependency (individual-specific knowledge ratio)
  • Aggregate indicator 3: Welcome Rate for Negative Voices (raise rate × welcome rate)
  • Don’t rush to objectivize; interrogate the KPIs themselves once a quarter

References

  1. “Improving Ratings”: Audit in the British University System — Marilyn Strathern, European Review, vol. 5, no. 3 (1997). [Reliability: High]

  2. Inefficient Knowledge Sharing Costs Large Businesses $47 Million Per Year — Panopto + YouGov (2018-07). [Reliability: Medium-High]

  3. Organizational Silence: A Barrier to Change and Development in a Pluralistic World — Elizabeth W. Morrison, Frances J. Milliken, Academy of Management Review, vol. 25, no. 4 (2000). DOI: 10.5465/AMR.2000.3707697. [Reliability: High]

  4. Removing the Shadow of Suspicion — Peter H. Kim, Donald L. Ferrin, Cecily D. Cooper, Kurt T. Dirks, Journal of Applied Psychology, vol. 89, no. 1 (2004). DOI: 10.1037/0021-9010.89.1.104. [Reliability: High]

This post is licensed under CC BY 4.0 by the author.