Build Your Organization's Context Supply Capability First: AI Adoption Follows as a Byproduct
- Intended readers: Executives, CTOs, organizational change leaders, engineering managers, and HR / knowledge management leads who are trying to make AI tools land in their organization but are not seeing the expected returns
- Assumed background: You have evaluated or rolled out AI tools (Copilot, Claude Code, ChatGPT, etc.) inside your company
- Reading time: Full read about 40 minutes / skim about 15 minutes
Overview
“We rolled out AI tools company-wide, but we’re not seeing the value we expected.” “Only a handful of people seem to actually be getting leverage from them.” This kind of complaint is everywhere right now. The standard explanations are “not enough training,” “data isn’t ready,” or “leadership isn’t bought in.” But many companies that take all of those seriously still see disappointing ROI.
The numbers back this up. A BCG survey of 1,000 executives across 20 industries and 59 countries (October 2024) found that 74% of companies are not getting concrete value from AI, and only 4% are realizing sustained, enterprise-wide value [1]. MIT Project NANDA’s 2025 “State of AI in Business” report concluded that of the $30–40 billion that enterprises have invested in GenAI, 95% have seen no measurable return [2]. McKinsey’s “The State of AI 2025” found that AI high performers (those getting 5%+ EBIT contribution from AI) are only about 6% of companies, and the impact of AI is “20% algorithm and 80% organizational rewiring” [3]. Google Cloud’s DORA team summed it up in their 2025 report: “AI is an amplifier — it amplifies an organization’s existing strengths and weaknesses” [4].
If you push on what these reports have in common, you end up at one easily overlooked organizational capability: the organization’s capacity to supply context. Who is doing what kind of work, why was a given decision made, what are the constraints, what does success look like — when an organization is bad at putting these things into words, both human-to-human collaboration and human-to-AI collaboration break down in the same ways. Note that the “organizational context supply capability” discussed here is a different concept from “Layer 4: Organizational Context (internal politics and evaluation systems an individual must read)” covered in the sister piece “The Five Layers of Context an IT Engineer Needs to Recognize.” Here it refers to the entire body of information an organization supplies to others (people and AI) outside the head of any one individual.
The problem runs deeper still. In many real organizations, things break at a much more fundamental level than “how to give instructions for a specific task”:
- The organization itself can’t agree on what the actual problem is
- Most employees don’t really know how the company makes money
- The mission and strategy of their own division have not been transmitted to them
- Nobody can reconstruct why this project even exists, or what was tried before
The really nasty part is that anyone who tries to surface these gaps gets labeled “a negative person.” This phenomenon — Organizational Silence [5] — has been documented in the research literature for over half a century. Organizations that cannot face their problems flee into positive thinking and maintain a surface-level story that “we’re doing fine.” But genuine positive emotion [6] is forward motion grounded in facing reality, which is a different thing from optimism that denies reality.
The argument of this article is about sequence. Before you try to transform your organization by adding AI, build your organization’s context supply capability. And before you can do that, you have to build a culture that welcomes negative observations. Once that is in place, you get remote-friendly operations, faster onboarding, and reduced knowledge loss when people leave as side effects — and AI ROI rides on top as another byproduct.
Decomposing the symptoms of “AI isn’t sticking”
“We rolled it out and nobody’s using it / it gives back weird answers / it’s faster to just write it myself / only a few specific people are good with it.” These look like AI capability problems on the surface, but they are usually organizational problems.
Stack Overflow’s 2025 Developer Survey (49,000 developers across 177 countries) found that 84% of developers use or plan to use AI tools, but only 3.1% report “high trust” in AI accuracy, and 46% express distrust (up sharply from 31% the prior year) [7]. The biggest complaints are “AI answers that are almost right but not quite right” and “spending time debugging AI-generated code.” That isn’t really a capability problem with the AI — it’s a phenomenon where the context supplied to the AI is too thin, so it returns plausible-but-off-target output.
A randomized controlled trial published by METR in July 2025 makes the picture even sharper. They assigned 246 real issues to 16 experienced open-source developers, and found that when AI use was permitted, completion times went up by 19% (confidence interval +2% to +39%) [8]. Developers had predicted a 24% speedup beforehand, and after the fact estimated they had been 20% faster — meaning subjective and objective measures diverged dramatically. AI was actually slowing them down, but they didn’t feel it. This is what happens when an organization has not designed how it decides what context to feed AI and what to use AI’s output for.
MIT NANDA’s 2025 study summarized the barrier to AI adoption as “not infrastructure, regulation, or talent — but learning” [2]: the absence of mechanisms that retain feedback, adapt to context, and improve. That is not a question about AI tool features; it is a question about organizational capability.
The four levels of context to share
The context an organization needs to share is not a single layer. It stacks, and if a lower layer is missing, the layers above can’t function.
```mermaid
flowchart TB
  A[Level -1: Problem Recognition & Sharing<br>What is the problem / root causes / what should we solve as an organization]
  B[Level 0: Foundational Context<br>How the company makes money / customers / strategy / mission]
  C[Level 1: Background Context<br>Why this project / past history / stakeholders]
  D[Level 2: Individual Tasks<br>Background / purpose / definition of done / constraints / decision rights / priority]
  A --> B --> C --> D
```
| Level | Content | What happens when it isn’t shared | Example mechanisms |
|---|---|---|---|
| Level -1: Problem Recognition | What the problem is / root causes / what the org should solve | Initiatives don’t match the actual problems / divisions diverge / no motivation to even start fixing things | Retros / postmortems, issue boards, Voice of Customer, strategy reviews |
| Level 0: Foundation | How the company makes money / who customers are / strategy / division mission | Work doesn’t connect to the business / priorities drift | All-hands, strategy maps, financial literacy training |
| Level 1: Background | Why this project / past history / stakeholders | Past mistakes get repeated / proposals fail to land | Pitch documents, ADRs, project charters |
| Level 2: Individual Tasks | 7 elements (background, purpose, definition of done, constraints, decision rights, stakeholders, priority) / 5 elements (facts, judgment, decisions needed, deadline impact, blocking questions) | Output at the execution level misses the point | 1-on-1s, issues, PR descriptions, reporting templates |
Most “communication” or “1-on-1” discussions, and most tooling and AI training discussions, live at Level 2. But what most organizations are actually stuck on is Level -1 and Level 0, and tightening up the upper layers while these remain blank produces only marginal returns.
For AI specifically:
- Level -1 not shared → the organization has not agreed on “what problem are we trying to solve with AI?” Training and tools get rolled out but don’t line up with the actual challenges
- Level 0 not shared → you can’t give the AI the context it would need to judge “does this fit our revenue model?” You can’t even put your business model into the prompt
- Level 1 not shared → you can’t ask the AI for a proposal that is informed by past history
- Level 2 not shared → individual instructions produce off-context output
Many companies’ AI initiatives consist of trying to harden Level 2 without noticing that the lower layers are crumbling.
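To make the four levels concrete, here is a minimal sketch of what a single AI request looks like when all four are supplied. Every company detail below is invented for illustration; what matters is the shape, not the specifics.

```markdown
<!-- Illustrative sketch: all company details below are invented -->
## Context for this request

### Level -1: problem recognition
SMB churn has doubled in two quarters. The Q3 strategy review agreed that
onboarding friction is the root cause to attack first.

### Level 0: foundation
We sell seat-based subscription invoicing software, so churn hits ARR
directly. Primary customers: accounting teams of 5-50 people.

### Level 1: background
A 2023 in-app tutorial attempt failed (retro R-114): it taught features
instead of getting the customer to their first invoice. Stakeholders:
Support, Design.

### Level 2: this task
Draft three onboarding-flow proposals. Done = each fits a two-week build,
respects the no-new-PII constraint, and names the metric it should move.
```

An organization that can fill in the first three sections without archaeology is one whose context supply capability is in working order; read by a new hire, the same preamble doubles as onboarding material.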
Why Level -1 is broken: the trap of organizational silence and positive thinking
Why doesn’t Level -1 (problem recognition and sharing) work? The biggest reason is that the people who point out problems get penalized. This isn’t anecdotal — half a century of organizational research keeps surfacing the same pattern.
Organizational Silence: a structure where speaking up is dangerous
Elizabeth Morrison and Frances Milliken theorized “Organizational Silence” in the 2000 Academy of Management Review [5]. They describe a collective-level phenomenon in which employees deliberately withhold problems and concerns. Implicit beliefs held by leadership (“employees are self-interested,” “the boss knows best”) combined with structural features (power differentials, demographic homogeneity in management) produce a shared belief that “speaking up is dangerous,” which then blocks change and learning.
Amy Edmondson’s 1999 paper on psychological safety in Administrative Science Quarterly [9] showed the same structure from the opposite angle. Teams with high psychological safety engage more in “learning behaviors” — asking questions, reporting errors, suggesting improvements — and their performance rises as a result. Low-safety teams, by contrast, fall into a state of “hiding mistakes and not surfacing problems.”
The same suppression operates at the individual level. Tesser, Rosen, and colleagues reported the “MUM Effect” in 1972: people feel stronger psychological resistance to delivering bad news than to delivering good news, driven by guilt, fear of retaliation, and worry about being judged negatively [10]. The effect has been replicated robustly for over fifty years.
People who keep pointing out problems — whistleblowers and their lower-grade equivalents — face retaliation. Kate Kenny and colleagues, in a 2019 Journal of Business Ethics paper, analyzed how organizations frame whistleblowers as “emotionally unstable, untrustworthy individuals” in order to neutralize their accusations — what the authors call “normative violence” [11]. This isn’t only about the dramatic cases. The everyday “oh, that person is just negative” label runs on the same machinery.
This effect is especially pronounced in Japanese organizations, but the underlying dynamics generalize. Toshio Yamagishi and Midori Yamagishi’s 1994 work [12] showed experimentally — contrary to the popular “collectivist and high-trust” framing of Japan — that Japanese participants showed lower general trust in strangers than Americans did. Their explanation: Japanese society maintains order through mutual surveillance and reputation-based sanctions inside closed groups (“anshin shakai,” an assurance-based society), and the cost of that arrangement is that general trust and voluntary dissent never develop. The threat of being shut out of the group governs individual behavior. The same trade-off — strong in-group control suppressing voluntary dissent — appears in any organization (regardless of country) that runs on tight in-group reputation rather than on broader, role-based trust.
Positive Thinking and Positive Emotion are not the same thing
When pointing out problems is suppressed, organizations flee in the opposite direction: “let’s stay positive,” “let’s keep things upbeat,” “no negative talk.” A “good vibes only” culture takes hold.
Here a critical distinction is in order. Positive Thinking and Positive Emotion are not the same thing.
Barbara Fredrickson, in her 2001 American Psychologist paper, presented the “Broaden-and-Build Theory of Positive Emotions” [6]. Positive emotions like joy, interest, contentment, and love broaden in-the-moment thought-action repertoires, and over time they build physical, intellectual, social, and psychological resources. These are emotions, and they arise out of confronting reality, not out of denying it.
Martin Seligman’s PERMA model [13] takes the same stance: well-being consists of Positive Emotion / Engagement / Relationships / Meaning / Accomplishment, with Positive Emotion just one of five components. Seligman explicitly proposes “Realistic Optimism” and distinguishes it from reality-denying optimism (Pollyannaism).
The problem is that the “positive thinking” rewarded inside many organizations has slid into “thinking that refuses to look at negative reality.” That isn’t Fredrickson’s positive emotion, and it isn’t Seligman’s realistic optimism. In fact, the evidence shows it is counterproductive.
Gabriele Oettingen, in her 2014 book Rethinking Positive Thinking and in a series of peer-reviewed papers (Oettingen & Mayer 2002, JPSP) [14], showed across two decades of experiments that positive fantasies — comfortably imagining the desired future as if it had already been achieved — actually reduce energy and effort and impair real-world achievement. The effect replicates across weight loss, job search, romantic relationships, and academic performance. Her alternative is WOOP (Wish → Outcome → Obstacle → Plan), which deliberately incorporates looking the obstacle in the face.
Julie Norem and Nancy Cantor reported “Defensive Pessimism” in JPSP in 1986 [15]. Some high-anxiety, high-ability individuals deliberately set low expectations and simulate possible bad outcomes in detail, converting anxiety into motivation — and as a result they actually perform better. When experimenters forced them to be positive, their performance dropped. Banning negativity actively lowers output for these people.
Recent research on Toxic Positivity points the same way. Sonia (2025) [16] gives an integrative review showing that the “good vibes only” culture demanded in workplaces, on social media, and in families produces emotional suppression → psychological distress → burnout. Kaunang et al. (2025) [17] report that excess workplace positivity “denies people a space to express genuine emotion, and raises the risk of stress, burnout, depression, and emotional isolation.”
In short: organizations that suppress negative thoughts and emotions simultaneously lose their learning opportunities, their motivational levers, and their ability to adapt to reality. As Charlan Nemeth has shown in her body of work on dissent [18], consensus orientation drives decision-making toward bias and mediocrity, and it is dissenting voices that move organizations closer to the truth.
Aside: the same structure shows up in how you brief AI
A bit off the main thread, but worth flagging: this positive/negative balance problem applies directly to how you engineer context for AI. If you only feed the AI positive information (“our product is doing well, this feature is heavily used”) and withhold the negatives — past failures, current weak points, landmines to avoid, recent customer complaints — the AI will return optimization proposals that are detached from reality. Anthropic’s framing of Context Engineering as “the minimal set of high-signal tokens” [19] presupposes that signal includes both positive and negative material. If your organizational culture suppresses negativity, the context you feed to AI will be skewed positive too, and the AI’s output will inherit that skew — which is one concrete pathway by which a positive-skewed culture drags down AI ROI.
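As a sketch of what a negativity-inclusive context might look like in practice (all specifics below are invented for illustration), the point is simply that failures and constraints are first-class sections, not omissions:

```markdown
<!-- Illustrative sketch: all names and numbers are invented -->
## Known weaknesses and past failures (deliberately kept in context)
- CSV export is our top source of support tickets this quarter
- The 2024 pricing-page redesign cut trial signups by 8% and was rolled
  back (retro: docs/retros/2024-pricing.md)
- Constraint: never touch the legacy billing service without an Ops review

## What is going well (context, not the problem)
- Mobile retention is above target; proposals must not regress it
```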
How to escape the trap
An organization that has filtered out the negative cannot share Level -1 (problem recognition). And without that, it cannot move on to building Level 0 either. Which is why transformation initiatives — including AI ones — keep spinning in place.
The way out is to redefine “negative observations” as “positive organizational behavior”:
- Pointing out problems is loyalty to the organization, not an attack
- Confronting failure is the entry point of learning, not defeat
- Voicing dissent builds collective intelligence, not disharmony
- Talking about anxiety is a source of motivation, not weakness (Defensive Pessimism)
This isn’t motivational fluff; it’s what organizational psychology and learning science have been saying consistently. As Edmondson notes, high-psychological-safety teams report more errors (because they don’t hide them) [9]. An organization in which negative voices are loud is, paradoxically, closer to a healthy state than one in which they are quiet.
The vicious cycle of “compensating individuals” and a culture that doesn’t share
That said, even in organizations with weak context-sharing cultures, some people clearly do get leverage out of AI. If you watch them, a pattern emerges: they are active context collectors by default.
Concretely:
- They don’t just read the official documents — they actively comb through Slack history, watch what other teams are doing, and keep up with industry information
- They don’t just infer, they verify — they extract background through casual conversation and trace decisions backward to understand why they were made
- They can reconstruct the four levels (problem recognition / foundation / background / individual task) on their own
- When they brief AI, the context they need is already in their head, ready to assemble
The sister article “The Five Layers of Context an IT Engineer Needs to Recognize” treats this state — an individual reconstructing the five layers (technology / user / business / organization / market and society) inside their own head — from the individual point of view.
These people, in other words, personally compensate for the information their organization fails to supply. That’s why they can use AI effectively even where the organization’s context-sharing culture is weak.
A bidirectional vicious cycle
Here is where it gets nasty. “Compensating individuals” and “a non-sharing culture” reinforce each other in a bidirectional vicious cycle.
```mermaid
flowchart TB
  A[Low context-sharing culture]
  B[Organization fails to supply needed information]
  C[High-compensation individuals collect it themselves]
  D[Organizational knowledge concentrates in those individuals]
  E[Knowledge gets siloed in people<br>42% of org knowledge is individual-specific]
  F["'Just ask that person' dependency deepens"]
  G[Motivation to share weakens further]
  A --> B --> C --> D --> E --> F --> G --> A
```
In a 2018 Panopto + YouGov survey of 1,001 employees at U.S. companies with 200+ employees, 42% of organizational knowledge was held only by specific individuals and not shared, meaning that if those people left, 42% of the work would become impossible to perform [20]. That is the endpoint of this cycle expressed as a number.
Fragility that’s hard to see from above
The trap is that, viewed from leadership in the short term, the “compensating individual” pattern looks like “we have great people.” As long as that person keeps producing output, the organization’s underlying fragility never becomes visible.
Worse, the compensator themselves often has weak motivation to share. If they are being rewarded for being “perceptive” and “always one step ahead,” then writing things down and removing the personal dependency would lower their relative standing. The organization, for its part, has the comfort of “we’re fine, that person handles it,” so the investment to fix the underlying gap keeps getting deferred. The two sides’ incentives align, and personal dependency quietly hardens.
Then — at a resignation, a leave, or a transfer — it all collapses at once. AI is the same story: when the compensating individual leaves, the very context that made AI usable goes with them.
The compensating individual and the supplying organization are complements
Building the organization’s context supply capability is not about making compensating individuals unnecessary. They remain valuable. The point of building the capability is to reduce the fragility of over-relying on them, lower the cost they have to bear to compensate, and make the organization functional even when they aren’t there. Individual compensation and organizational supply are complements, and you can’t run on just one wheel.
Putting things into words is an organizational capability
“Putting context into words” looks like a personal skill, but it is also an organizational capability. The “compensating individuals” of the previous section can equally be read as: people who have to compensate personally because the organization’s capability is too low.
Ikujiro Nonaka and Hirotaka Takeuchi’s The Knowledge-Creating Company (Oxford University Press, 1995) [21] distinguished tacit knowledge from explicit knowledge and presented the SECI model (Socialization → Externalization → Combination → Internalization). The crux of organizational knowledge creation is Externalization (turning tacit into explicit). Organizations weak at externalization end up running on knowledge that lives only in people’s heads, in oral tradition, and in shared atmosphere.
Edward T. Hall’s Beyond Culture (1976) [22] frames the same axis culturally. High-context cultures embed meaning in shared context, nonverbal cues, and relationships, with little explicit verbalization. Japan and East Asia tend to sit on the high-context end. As long as strong shared context is intact, this is efficient. But the moment you bring in someone you can’t assume shares that context — remote teammates, multicultural collaborators, new hires, contractors, AI tools — communication cost spikes.
Ron Westrum, in a 2004 Quality and Safety in Health Care paper [23], classified organizational cultures into three types: pathological (information is suppressed) / bureaucratic (information is ignored) / generative (information is actively sought). (We touched on this in the sister article as Layer 4 organizational context that an individual must read; here we look at it as a predictor of organizational performance.) Forsgren, Humble, and Kim’s Accelerate (2018) [24] statistically analyzed 23,000+ responses across four years and showed that Westrum’s generative culture is a predictor of high-performing software delivery. DORA’s 2024 report observed that “unstable priorities significantly degrade productivity, and even strong leadership and rich documentation cannot fully compensate” [25].
DORA 2025 collapses all of this into a single line: “AI is an amplifier — it amplifies an organization’s existing strengths and weaknesses” [4].
What to build: human-to-human Context Engineering
The phrase Context Engineering [19][26], popularized as a way of instructing AI, applies just as well to human-to-human collaboration. Anthropic defines Context Engineering as “the strategies for curating and maintaining the optimal set of tokens (information) at LLM inference time.” Translated to an organization, that becomes “the organizational capability to supply and maintain the optimal set of information for people to make decisions and take action.” At Level 2 (individual task), the elements to put in place are these.
What the instructing side (manager / requester) needs to supply
| Element | Symptom when missing |
|---|---|
| Background / why now | The recipient can’t judge priority and pours time into work that isn’t urgent |
| Purpose / goal | The output becomes an end in itself and drifts from the original purpose |
| Definition of done | “Is this done?” check-ins multiply, rework happens |
| Constraints / preconditions | Designs violate hard constraints, requiring big rework later |
| Decision rights and their boundaries | The recipient escalates things they could decide, or unilaterally decides things they shouldn’t, causing collisions |
| Stakeholders | Affected parties are missed, generating blowback later |
| Priority | Everything becomes “all important” and nothing finishes |
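One way to operationalize the seven elements is a lightweight request template. This is a sketch, not a canonical format; the headings below are placeholders to adapt to whatever your issue tracker already uses:

```markdown
<!-- Illustrative sketch: headings are placeholders, not a standard -->
## Request: <one-line summary>
- Background / why now: <what changed, and why this can't wait a quarter>
- Purpose / goal: <the outcome this serves, not the artifact itself>
- Definition of done: <observable criteria a reviewer can check>
- Constraints / preconditions: <deadlines, budgets, systems not to touch>
- Decision rights: <what you may decide alone, what to escalate>
- Stakeholders: <who is affected, who must be consulted>
- Priority: <relative to my other open requests to you>
```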
What the reporting side (subordinate / executor) needs to supply
| Element | Symptom when missing |
|---|---|
| Facts (progress, blockers, current state) | The manager can’t see the situation; problems blow up at the very end |
| Their own judgment and the reasoning behind it | The manager can’t evaluate the quality of judgment and slides into micromanagement |
| Decisions needed | The reporter stalls, and the manager spends time pulling out “what is it you need from me?” |
| Impact on deadlines | A delay isn’t communicated as a delay, and stakeholders’ plans cascade-fail |
| Open questions they can’t resolve | The reporter holds onto them and the decision becomes a bottleneck |
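The reporting side can be templatized the same way. Again a sketch; the five element names matter more than the exact layout:

```markdown
<!-- Illustrative sketch: adapt the layout to your reporting channel -->
## Status: <task or project> / <date>
- Facts: <progress, blockers, current state, observations only>
- My judgment and reasoning: <what I think it means, and why>
- Decisions needed from you: <explicit asks, with options where possible>
- Deadline impact: <none / at risk / slipped, with the new date>
- Open questions I can't resolve: <what's blocking, who could answer it>
```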
Structurally, these are isomorphic to instructing AI. The AI system prompt = the initial agreement on role and background. The codebase given to AI = the business context to grasp. AI tool definitions = decision-right boundaries. AI output format specs = report format. An organization that has Level 2 in good shape naturally supplies AI with context too. An organization that doesn’t ends up trying to get both humans and AI to “just figure it out,” and fails on both fronts.
But — to repeat — even Level 2 done well will produce only limited returns if the lower layers (-1, 0, 1) are blank.
Side benefits: remote work, onboarding, and preventing knowledge loss at exit
The ROI of building organizational context supply is enormous before you even get to AI.
The cost of failed knowledge sharing
Per the same Panopto survey, a large enterprise (averaging 17,700 employees) loses about $47 million per year to inefficient knowledge sharing — $42.5M in lost productivity plus $4.5M in inefficient onboarding [20]. Other findings:
- Knowledge workers waste an average of 5.3 hours per week “waiting on information from colleagues” or “rebuilding existing knowledge from scratch”
- 81% feel frustration at not being able to find information they need
These costs predate AI. The organizational capability to put context into words and into documents directly attacks them.
Shorter onboarding
The SHRM Foundation’s Effective Practice Guideline (Talya Bauer) [27] estimates onboarding cost at about $4,100 per hire, ramp-up at 1–6 months, and lost productivity during ramp-up at roughly 2.5% of total annual production. Structured onboarding (which is to say, documented context supply) cuts time-to-proficiency by up to 50%. A Brandon Hall Group / Glassdoor study reports that strong structured onboarding improves new-hire retention by 82% and productivity by over 70% [28].
The infrastructure for remote and async work
GitLab uses “handbook-first” as official philosophy: every policy, process, and decision goes into the public handbook first, with Slack and meetings playing only a supporting role [29]. Matt Mullenweg (co-founder of Automattic / WordPress.com), in his 2020 essay “Distributed Work’s Five Levels of Autonomy” [30], described five stages of distributed work, where Level 3 begins “investing in robust async processes and written communication that can replace meetings,” and Level 4 reaches genuine async operation in which “people are evaluated on what they produced, not when or how they made it.”
Stripe’s documentation culture is another reference point. Former Stripe employee Brie Wolfson, in a First Round interview, described how Stripe sustains a writing culture through concrete mechanisms — project kickoff memos, a Google Group for retrospectives, “state” emails, shipped/unshipped catalogs [31]. A quote from Stripe’s CTO that appears in The Pragmatic Engineer’s reporting captures the essence of writing culture: “If you invest extra time to communicate ideas in clear, precise writing, you get disproportionate returns, because there are far more readers than writers” [32].
These all have value independently of AI — and once they exist, they are automatically the preconditions for AI to work too.
AI adoption as a byproduct
An organization that has built all of the above puts AI tools in and gets value from them naturally. Why?
Resource allocation in BCG’s “leader” cohort (the 4% high-value group) breaks down clearly: 10% algorithm / 20% tech & data / 70% people & process [1]. McKinsey similarly puts AI impact at “20% algorithm, 80% organizational rewiring” [3]. Prompt training and tool selection live on “the 20% side”; pouring most budget and time there while the remaining 80% (the organization’s context supply capability) stays threadbare gives you a flat ROI curve.
By contrast, an organization with strong context supply capability:
- Naturally instructs AI with all four layers of context — because it does the same when instructing humans
- Has clear evaluation criteria for AI output — because the criteria for evaluating human output are already articulated
- Accumulates failure cases as organizational knowledge — because Westrum-generative culture circulates that information
- Already has the documents to feed AI — CLAUDE.md, AGENTS.md, the handbook
- Doesn’t depend on a specific compensating individual — because the context is in the organization
To borrow DORA 2025’s framing: for an organization that has its house in order, AI becomes an amplifier of capability; for one that doesn’t, AI becomes an amplifier of disorder [4]. Without the underlying capability, “AI adoption” turns out to be a fragile, surface-level success that depends on the continued presence of compensating individuals.
There’s a useful reverse direction too. While you are building your organization’s context supply capability, AI itself becomes a tool you can use. Writing a CLAUDE.md / AGENTS.md / prompt template doubles as onboarding documentation. Background documents written for new hires function as system prompts for AI. There is no need to maintain “documents for AI” and “documents for humans” as separate things — they are the same.
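As a minimal sketch of that dual-use document (all project details invented for illustration), the same file can orient a new hire on day one and prime an AI session:

```markdown
<!-- Illustrative sketch: project details are invented -->
# CLAUDE.md (also the first document we hand to a new hire)

## What this service does and how it earns money (Level 0)
Invoicing API for SMB accounting teams; revenue is per-seat subscription.

## Why the architecture looks this way (Level 1)
Billing was split out of the monolith in 2023 after repeated deploy
freezes; read ADR-017 before proposing to merge them back.

## Working rules (Level 2)
- Definition of done: tests green; update the ADR if a decision changed
- Never write to ledger tables directly; go through the billing API
```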
A staged approach: start at Level -1
This doesn’t have to be a big-bang transformation program. But the order matters. Starting from Level 2 training and tool selection, while the lower layers are blank, will not deliver results. Build from the bottom.
Step 1: Make Level -1 work — build a culture that welcomes the negative
The first move is a cultural shift toward welcoming people who point out problems. Without this, none of the information you’ll later try to surface ever shows up.
- Institutionalize retros and postmortems. Failures and problems are the entry point to learning, not occasions for blame
- Make a blameless culture explicit. Don’t argue “who is at fault”; argue “what happened” and “what to change”
- Have leadership and managers go first in articulating their own failures — embodying psychological safety in Edmondson’s sense [9]
- Publicly thank and reward people who raised “negative” observations — bake it into the evaluation system
- Cultivate positive emotion, not positive thinking. Welcome modes of thought that confront anxiety and obstacles directly, in the spirit of Defensive Pessimism [15] and WOOP [14]
Side effects: problems become visible faster, and the tacit knowledge of compensating individuals starts moving into the open of its own accord.
Step 2: Build Level 0 and Level 1 — share foundation and background
Once an atmosphere exists in which problems can be discussed, you can start building the foundation context.
- In all-hands, strategy reviews, and quarterly reviews, repeatedly share the company’s revenue model, KPIs, and customer profile
- Templatize project Pitch documents and ADRs (Architecture Decision Records) so “why are we doing this / why did we pick this option” gets recorded (a minimal ADR skeleton follows this list)
- Accumulate retros from past projects in a searchable form
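For the ADR bullet above, a minimal skeleton in the spirit of Michael Nygard’s widely used format is enough to start with. The number and section names below are conventional placeholders, not a standard:

```markdown
<!-- Illustrative sketch: number and headings are placeholders -->
# ADR-012: <decision title>
- Status: proposed | accepted | superseded by ADR-NNN
- Context: <the problem, the forces, why "do nothing" was unacceptable>
- Options considered: <including the rejected ones, and why they lost>
- Decision: <what we chose>
- Consequences: <what gets easier, what gets harder, known risks>
```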
Side effects: shorter onboarding, less friction in cross-team collaboration.
Step 3: Build Level 2 — templatize instructions and reports
Now you can take on the quality of instructions and reports at the manager-on-the-ground level.
- For your next instruction or request, deliberately verbalize the seven elements from the manager’s side (background / purpose / definition of done / constraints / decision rights / stakeholders / priority)
- Lightly standardize a template for upward reports that includes facts / one’s own judgment and reasoning / decisions needed / deadline impact / open questions
- For the people who get “just ask them, it’s faster” reactions, work with them to gradually externalize what’s in their head. Even a 1-on-1 question like “what did people ask you about recently? could that go into a doc?” begins moving knowledge from one person into the organization
Side effects: fewer back-and-forth confirmations, easier transfer of the same elements into AI prompts, lower over-dependence on compensating individuals.
Step 4: Reuse what you wrote, by feeding it to AI
Don’t throw away the context you’ve accumulated in steps 1–3. Centralize it. Wiki, README, CLAUDE.md — whichever. AI adoption arrives naturally at this step. Onboarding material, knowledge transfer at exit, and AI input material are the same artifact.
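What “centralize” can mean in practice: one index that both humans and AI start from. A sketch with invented paths; the layout is a suggestion, not a standard:

```markdown
<!-- Illustrative sketch: paths and layout are invented -->
# docs/README.md: the single entry point for humans and AI
- strategy/: revenue model, KPIs, customer profile (Level 0)
- adr/: why we decided what we decided (Level 1)
- retros/: postmortems and retrospectives (Level -1 made visible)
- templates/: request and status-report templates (Level 2)
- CLAUDE.md / AGENTS.md: point at the files above; no separate "AI docs" fork
```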
A caveat: not everything is a verbalization problem
It would be too simple to attribute every stalled AI initiative to “the organization’s context supply capability is too low.” The claim of this article is not that this is the only cause — it is that it is a frequently overlooked, high-leverage necessary condition.
Executive commitment, licensing, data readiness, model selection, security, regulatory compliance — there are other factors in AI adoption. But those are relatively well-discussed. Organizational context supply capability, especially the dysfunction of Level -1 (problem recognition), barely makes it onto the agenda. It isn’t a cost line you see day-to-day, so investment in it tends to look like “we won’t see clear numbers from this.”
In reality: Panopto’s $47M annual loss from inefficient knowledge sharing [20], SHRM’s 2.5% productivity loss during onboarding [27], BCG’s “leaders pour 70% into people and process” [1] — each of these is grounds to invest in this capability. AI ROI sits on top as a byproduct, which makes for a cleaner investment story.
It’s also crucial not to confuse “welcoming the negative” with “letting negativity flow unchecked.” Defensive Pessimism and WOOP are modes of thought that use negativity constructively; they are not permission to wallow in pessimism or grievances. Likewise, forcing everyone to write everything down backfires. Avoid perfectionism: start at the practical level of “the granularity you would feed to AI” or “the granularity you would hand to a new hire,” and keep it sustainable. The same posture applies to compensating individuals — don’t villainize them, respect their tacit knowledge, and migrate it gradually.
Summary
- Stalled AI adoption is usually not an AI capability problem but an organizational context supply capability problem. BCG: 74% can’t extract value from AI [1]. MIT NANDA: 95% of GenAI investment yields no measurable return [2]. McKinsey: 80% of AI impact is organizational rewiring [3]. DORA 2025: “AI is an amplifier” [4]
- The context an organization needs to share has four levels (Level -1: Problem Recognition / Level 0: Foundation / Level 1: Background / Level 2: Individual Tasks). Most organizations are stuck at Level -1 and Level 0, and tightening only Level 2 (how to give instructions) yields limited returns
- Level -1 is broken because of organizational silence [5] and the positive thinking trap — a culture that suppresses negative observations (MUM effect [10], retaliation against whistleblowers [11], assurance-based-society dynamics [12]) combined with forced optimism mistaken for Realistic Optimism (Toxic Positivity [16][17], positive fantasy impairing achievement [14], suppressed Defensive Pessimism [15]) — keeps the organization running away from its problems
- Positive thinking (cognition) and positive emotion (emotion) are not the same thing. Per Fredrickson [6] and Seligman [13], real positive emotion is built on top of confronting reality
- “Compensating individuals” can use AI even where organizational context supply is weak, but they create personal dependency and feed a bidirectional vicious cycle that ends at “42% of organizational knowledge is individual-specific” [20]
- Side effects of building this capability: ~$47M/year in knowledge-sharing inefficiency reduced [20], onboarding time-to-proficiency cut by up to 50% [27], 82% better retention [28], real remote/async infrastructure [29][30][31][32] — and AI ROI rides on top
- A staged approach: (1) build a culture that welcomes the negative, (2) share foundation and background, (3) templatize instructions and reports, (4) feed what you wrote to AI. Reversing the order makes each step less effective
Sequence is the point. Not “let’s add AI and improve productivity,” but “build a culture that welcomes the negative, build the organization’s context supply capability, and AI adoption accelerates as a byproduct on top of that.” Without it, AI investment will simply amplify your organization’s existing disorder, and any “success” will rest on the shoulders of a few compensating individuals — fragile, surface-level success. With it, AI ROI rises naturally, and well before that you collect significant returns from remote-friendly operations, faster onboarding, and reduced knowledge loss at exit. Individual compensation and organizational supply are complements; you can’t run sustainably on one wheel. And for both wheels to turn, you first need air in the room that says: “It’s okay to talk about problems.”
Related articles
You may also be interested in these related pieces:
- The Five Layers of Context an IT Engineer Needs to Recognize: The Range Technical Skill Alone Cannot Reach — A map of the context an individual needs to recognize. This is the inventory that the “compensating individuals” of this article carry in their heads
- Why Engineering Management Becomes a Penalty Game in Japan: Three Separations — The structural reasons why organizational context breaks down inside Japanese companies
- The EM’s Six Subordinate Types: A Playbook — Communication design between manager and report
- AI-Optimized Markdown Documentation: Designing Documents for Agents to Read — Techniques for handing organizational context documents to AI
- Building an AI-Native Engineering Team: A Guide — Practical guide to integrating AI into the organization
References
References below are listed in the order in which they are cited in the body.
1. AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value / Where’s the Value in AI? — Boston Consulting Group (2024-10). Survey of 1,000 CxOs across 20 industries and 59 countries. 74% can’t extract value; only 4% realize enterprise-wide value. Resource allocation among leaders: 10% algorithm / 20% tech & data / 70% people & process. [Reliability: High]
2. The GenAI Divide: State of AI in Business 2025 — MIT Project NANDA (2025-08). Of $30–40B enterprise GenAI investment, 95% see no measurable return. The barrier is “not infrastructure, regulation, or talent — but learning: lack of mechanisms to retain feedback, adapt to context, and improve.” [Reliability: High]
3. The State of AI 2025: Agents, innovation, and transformation — McKinsey & Company / QuantumBlack (2025-11). 88% are using AI; AI high performers (5%+ EBIT contribution) are only ~6%. AI impact is “20% algorithm, 80% organizational rewiring.” [Reliability: High]
4. 2025 DORA State of AI-Assisted Software Development Report — Google Cloud / DORA (2025). “AI is an amplifier — it amplifies an organization’s existing strengths and weaknesses.” Returns on AI investment depend more on strengthening the underlying organizational practices (the sociotechnical system) than on the technology. [Reliability: High]
5. Organizational Silence: A Barrier to Change and Development in a Pluralistic World — Elizabeth W. Morrison, Frances J. Milliken, Academy of Management Review, vol. 25, no. 4 (2000). DOI: 10.5465/AMR.2000.3707697. Theorizes organizational silence: implicit leadership beliefs and organizational structure produce a shared belief that “speaking up is dangerous.” [Reliability: High]
6. The Role of Positive Emotions in Positive Psychology: The Broaden-and-Build Theory of Positive Emotions — Barbara L. Fredrickson, American Psychologist, vol. 56, no. 3 (2001). DOI: 10.1037/0003-066X.56.3.218. Positive emotions broaden thought-action repertoires and build resources. A different construct from cognitive “positive thinking.” [Reliability: High]
7. 2025 Developer Survey — AI — Stack Overflow (2025). 49,000 developers across 177 countries. 84% use or plan to use AI tools; only 3.1% report high trust; 46% express distrust (up from 31% the prior year). The largest complaint is “AI answers that are almost right but not quite right.” [Reliability: High]
8. Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity — METR (2025-07-10). arXiv:2507.09089. RCT with 16 experienced developers and 246 real issues. With AI permitted, completion time increased 19% (CI +2% to +39%). [Reliability: High]
9. Psychological Safety and Learning Behavior in Work Teams — Amy C. Edmondson, Administrative Science Quarterly, vol. 44, no. 2 (1999). DOI: 10.2307/2666999. Psychological safety activates learning behaviors; high-safety teams report more errors (because they don’t hide them). [Reliability: High]
10. On the reluctance to communicate bad news (the MUM effect): A role play extension — Tesser, Rosen, Tesser, Journal of Personality, vol. 40, no. 1 (1972). DOI: 10.1111/j.1467-6494.1972.tb00651.x. The MUM effect. Guilt, fear of retaliation, and concern over negative judgment suppress delivery of bad news. Replicated robustly for over 50 years. [Reliability: High]
11. Mental Health as a Weapon: Whistleblower Retaliation and Normative Violence — Kate Kenny, Marianna Fotaki, Stacey Scriver, Journal of Business Ethics, vol. 160, no. 3 (2019). DOI: 10.1007/s10551-018-3868-4. Organizations neutralize whistleblowers by framing them as “emotionally unstable” — normative violence. [Reliability: High]
12. Trust and Commitment in the United States and Japan — Toshio Yamagishi, Midori Yamagishi, Motivation and Emotion, vol. 18, no. 2 (1994). DOI: 10.1007/BF02249397. Experimentally refutes the “Japan is collectivist and high-trust” framing. Mutual surveillance and reputation-based sanctions inside closed groups produce an “anshin shakai (assurance-based society),” at the cost of voluntary dissent. [Reliability: High]
13. Flourish: A Visionary New Understanding of Happiness and Well-being — Martin E. P. Seligman, Free Press / Simon & Schuster (2011). ISBN: 9781439190760. The PERMA model (Positive Emotion / Engagement / Relationships / Meaning / Accomplishment) and Realistic Optimism, distinguished from reality-denying Pollyannaism. [Reliability: High]
14. Rethinking Positive Thinking: Inside the New Science of Motivation — Gabriele Oettingen, Current / Penguin Random House (2014). ISBN: 9781617230233. Underlying empirical paper: Oettingen & Mayer (2002) JPSP 83(5). Positive fantasy impairs achievement; the alternative is WOOP (Wish → Outcome → Obstacle → Plan). [Reliability: High]
15. Defensive Pessimism: Harnessing Anxiety as Motivation — Julie K. Norem, Nancy Cantor, Journal of Personality and Social Psychology, vol. 51, no. 6 (1986). DOI: 10.1037/0022-3514.51.6.1208. Defensive pessimists set deliberately low expectations and simulate bad outcomes, converting anxiety into motivation. Forcing them to be positive lowers their performance. [Reliability: High]
16. The Dark Side of Positivity: How Toxic Positivity Contributes to Emotional Suppression and Mental Health Struggles — Sonia, The International Journal of Indian Psychology, vol. 13, no. 2 (2025). DOI: 10.25215/1302.104. Forced “good vibes only” in workplaces, social media, and families produces emotional suppression → psychological distress → burnout. [Reliability: Medium]
17. Analysis of Toxic Positivity Behavior and Its Impact on Individual Mental Health in the Workplace — Kaunang et al., Journal of the American Institute, vol. 2, no. 5 (2025). DOI: 10.71364/9rkkkh61. Workplace over-positivity denies people a space for genuine emotional expression and raises stress, burnout, and depression. [Reliability: Medium]
18. In Defense of Troublemakers: The Power of Dissent in Life and Business — Charlan Jeanne Nemeth, Basic Books (2018). ISBN: 9780465096299. Consensus orientation produces bias, mediocrity, and error. Minority dissent — whether right or wrong — moves the group closer to the truth. [Reliability: Medium-High]
19. Effective context engineering for AI agents — Anthropic Engineering (2025-09-29). Strategies for curating and maintaining the optimal set of tokens (information) at LLM inference time. [Reliability: High]
20. Inefficient Knowledge Sharing Costs Large Businesses $47 Million Per Year — Panopto + YouGov (2018-07). Survey of 1,001 employees at U.S. companies with 200+ employees. Large enterprises (avg 17,700 employees) lose $47M/year. 5.3 hours/week wasted waiting on information; 42% of organizational knowledge is individual-specific; 81% feel frustration at not finding the information they need. [Reliability: Medium-High]
21. The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation — Ikujiro Nonaka, Hirotaka Takeuchi, Oxford University Press (1995). ISBN: 978-0195092691. Tacit/explicit knowledge distinction and the SECI model (Socialization → Externalization → Combination → Internalization). [Reliability: High]
22. Beyond Culture — Edward T. Hall, Anchor Books / Doubleday (1976). ISBN: 978-0385124744. Establishes the high-context / low-context culture distinction. [Reliability: High]
23. A typology of organisational cultures — Ron Westrum, Quality and Safety in Health Care, vol. 13, Suppl 2 (2004). DOI: 10.1136/qshc.2003.009522. Classifies organizational cultures into pathological, bureaucratic, and generative types. [Reliability: High]
24. Accelerate: The Science of Lean Software and DevOps — Nicole Forsgren, Jez Humble, Gene Kim, IT Revolution Press (2018). ISBN: 978-1942788331. Statistical analysis of 23,000+ responses. Westrum-generative culture is a predictor of high software-delivery performance. [Reliability: High]
25. 2024 Accelerate State of DevOps Report — Google Cloud / DORA (2024). Unstable priorities significantly degrade productivity; even strong leadership and rich documentation can’t fully compensate. [Reliability: High]
26. Context Engineering for Coding Agents — Birgitta Böckeler, martinfowler.com (2026-02-05). A four-way classification of context for coding agents and a warning against the “illusion of certainty.” [Reliability: Medium-High]
27. Onboarding New Employees: Maximizing Success — Talya N. Bauer, SHRM Foundation Effective Practice Guideline. Onboarding cost ~$4,100/hire, ramp-up 1–6 months, ~2.5% productivity loss/year. Structured onboarding cuts time-to-proficiency by up to 50%. [Reliability: High]
28. Unlocking the Power of Onboarding to Aid Employee Retention — Brandon Hall Group (2015, study commissioned by Glassdoor). Strong structured onboarding improves retention by 82% and productivity by over 70%. [Reliability: Medium-High]
29. The importance of a handbook-first approach to communication — GitLab Inc. (continuously updated). Handbook-first philosophy: every policy, process, and decision is written into the public handbook first. [Reliability: High]
30. Distributed Work’s Five Levels of Autonomy — Matt Mullenweg, ma.tt (2020-04). Defines five levels of distributed work. Level 3 invests in written/async communication; Level 4 reaches “evaluating people on what they produced, not when or how they made it.” [Reliability: High]
31. From kickoffs to retros and Slack channels — Stripe’s documentation best practices with Brie Wolfson — First Round Review (2023). Testimony from former Stripe employee Brie Wolfson on mechanisms like project kickoff memos, retrospectives Google Group, “state” emails, and shipped/unshipped catalogs. [Reliability: High]
32. Inside Stripe’s Engineering Culture, Part 2 — Gergely Orosz, The Pragmatic Engineer (2024). Reporting on Stripe’s internal “Trailhead” documentation portal and writing culture. CTO quote: “If you invest extra time to communicate ideas in clear, precise writing, you get disproportionate returns, because there are far more readers than writers.” [Reliability: Medium-High]