
An EM's Real Job: Putting People on the Side That Gives AI Instructions — Building a Practice Ground Through 3 Separations

  • Target audience: IT engineering managers and tech leads, HR and executives rolling out AI organization-wide, engineers who want to grow their own AI-instruction skills
  • Prerequisites: General workplace vocabulary (1:1s, task assignment, prompts)
  • Reading time: Approx. 40 min full read / 15 min for key points

Overview

“Use AI more.” “Stop waiting for instructions.” Many workplaces issue both slogans, and both spin in place. The starting point of this article is to refuse to treat them as separate slogans. The actual goal is to make people capable of using AI. Breaking out of instruction-waiting behavior is just one of several means to that end. Productivity in the AI era depends on whether humans can stand on the side that gives AI its instructions1. But that shift does not happen through individual mindset alone.

Why not? As the prior article Why packing all of management onto one person turns the role into a punishment game diagnosed, Japanese IT engineering managers (EMs) carry a structural load: roughly 96% are playing managers, 39.9% of their time goes to individual-contributor work2, 21.4% of members express interest in becoming managers3, and employee engagement sits at 6%4. They absorb AI management, people management, business management, specialist people management, and legal/psychological/values judgments, all on one person. Inside that punishment-game structure, the venue in which a direct report could practice giving AI instructions is never set up by the organization. Until the EM builds the structure, individual breakouts cannot start.

This is the core of the article. The three separations introduced in the prior article — by target (AI vs. humans), by domain (business management vs. specialist people management), and by layer of debate (legal facts / psychological tendencies / personal values) — are exactly the building blocks of a safe practice ground that structurally raises a team’s AI-instruction skill. As Hidi & Renninger’s four-phase interest-development model5 and Self-Determination Theory’s research on intrinsic motivation6 both show, a new skill takes root as personal interest only after the environment first triggers situational interest. The trigger for AI-instruction skill does not fall from the sky. Someone has to lower it.

Concretely, each of the three separations works as a different layer of the practice ground. Separation 1 (target) frames AI management as an individual skill everyone learns regardless of management track. Separation 2 (domain) embeds hands-on AI delegation and review inside ordinary business management. Separation 3 (layer of debate) keeps EMs out of values territory, which is what preserves the psychological safety the practice requires. Only when all three are in place do direct reports drift naturally onto the side that gives AI its instructions.

That said, this article does not claim every EM can put this into practice tomorrow. An EM in the middle of the punishment game has only one starting move available: escalate the case for the three separations upward themselves. Even so — the moment an EM recognizes the structural problem, the organization’s AI capability begins to move. That single point is the article’s only real claim.

Why “fixing instruction-waiting” should not be the goal

Before going further, the framing has to be set straight.

“I want to fix this person’s instruction-waiting habit.” Many EMs hold this wish, and it is almost guaranteed to dead-end if you make it the goal. There are two reasons.

First, instruction-waiting is a state, not a personality trait7. Ever since Martin Seligman’s 1960s work on learned helplessness, psychology has consistently shown that passivity is an adaptive strategy produced by the environment7. The thing to “fix” is the environment, not the person — and when an EM pressures the individual to “be more proactive,” it violates the Self-Determination Theory need for autonomy and tends to reinforce passivity instead6.

Second, and more fundamentally, breaking out of instruction-waiting is not itself the goal. What the AI era actually demands is not “active people” in the abstract but “people who can deliver outcomes by giving AI good instructions.” A widely shared post by an engineering manager with 25 years of experience organizes the survival conditions for the AI era into three: “mindset (don’t sit and wait for tasks),” “technical skill (give AI appropriate instructions),” and “collaboration skill”1. This article re-reads those three not as independent requirements but as a structure in which the first, mindset, exists in service of the latter two: breaking out of instruction-waiting is not a self-contained goal.

flowchart TB
  GOAL["Goal<br>Deliver outcomes with AI"]
  MEANS1["Means A<br>Break instruction-waiting"]
  MEANS2["Means B<br>Deploy AI tooling"]
  MEANS3["Means C<br>Prompt training assets"]
  EM["EM structural support<br>Practice ground via 3 separations"]
  GOAL --> MEANS1 & MEANS2 & MEANS3
  EM --> MEANS1 & MEANS2 & MEANS3

In other words, breaking instruction-waiting is just one of several means to the actual goal of AI capability. Mandatory tool rollouts and internal prompt templates sit alongside it as parallel means. If you frame “fixing instruction-waiting” as an independent goal, you turn what should be solved by a combination of structural moves into a demand to remodel the individual’s interior.

This is the pivotal turn for the EM. As discussed in the prior article The “don’t try to change people” EM playbook, “change the report’s mindset” has bad cost-benefit both legally and psychologically, and it routinely steps onto legal red lines8. Meanwhile, “set up an organizational practice ground for giving AI instructions” sits entirely inside the EM’s authority and does not damage the report’s autonomy — it lives in the safe verbs of observe, arrange, place8.

Goal: AI capability. Means: structuring the practice ground. Treat breaking out of instruction-waiting as a milestone that direct reports pass through naturally on that practice ground. That is the frame of this article.

Why nothing starts without EMs’ structural support

“If we just fix the mindset, AI will get used.” This is another common counter. There are four reasons mindset alone is not enough.

Reason 1: AI capability requires organizational context

As Ryutaro Yoshiba (Ryuzee) points out in his slides on team design in the generative-AI era, the manager/leader role in the AI era shifts from issuing instructions to organizing context and supporting change9. Giving instructions to AI agents only produces results when the surrounding context — domain knowledge of the target system, baseline constraints, team conventions, recent decisions — is aligned. Organizing context cannot be done by an individual; it has to be arranged at the organizational level.

The Harvard Business School–BCG joint experiment by Dell’Acqua and colleagues (758 consultants, 18 tasks, 2023) supplies the quantitative backing. Subjects using GPT-4 completed 12.5% more subtasks on average and produced answers that were over 40% higher quality in domains AI was good at (inside the “jagged frontier”) — and performed worse in domains AI was bad at10. The crucial point is that judging which tasks suit AI was not obvious even to expert knowledge workers10. Ryuzee’s slides put it well: “AI is an amplifier — it magnifies the strengths of healthy organizations and the dysfunctions of broken ones”9.

If the report’s mindset shifts but the organization’s context is unorganized, AI will not act as an amplifier. Organizing context is exactly the EM’s job.

Reason 2: The first step requires a place where failure is safe

Everyone is bad at giving AI instructions at first. Vague instructions, inflated expectations, mistaken premises — through those failures, you slowly build a feel for what to convey to AI and how. Hidi & Renninger’s four-phase interest-development model shows that a new interest takes root only after passing through “triggered situational interest → maintained situational interest → emerging individual interest → well-developed individual interest”5. The first triggered situational interest needs a place to happen. That place is the practice ground the organization sets up.

That practice ground has to come with a guarantee — backed by Edmondson’s work on psychological safety — that failure will not be punished11. Whether “I asked AI and got a weird answer” or “I burned an hour iterating on prompts” gets treated as wasted time in evaluation conversations is a judgment that sits inside the EM’s authority and shapes the team’s culture.

Reason 3: With 6% engagement, individual effort is not a viable strategy

As a structural premise: in Gallup’s 2024 survey, Japan’s employee engagement was 6% (world average 23%)4, and member-level interest in management at 21.4% is among the lowest in the world3. The premise of “let individuals figure it out by sheer motivation” does not statistically hold.

This is not an argument that individual effort is unnecessary. In an environment where the majority cannot rely on individual motivation, the organization providing learning opportunities does not negate individual effort — it raises the probability that individual effort actually pays off. This is in line with the prior article’s distinction: filter at hiring, fit-to-role for existing employees12. The same applies to AI-instruction skill. Build an environment where those willing to learn can, and find different roles for those who do not. The EM’s job concentrates on the former.

Reason 4: “People who can use AI without a venue” are exceptions, not strategy

Some EMs, reading this, will object: “We have people who use AI heavily even though we never set up a venue.” Such people do exist. But they are exceptions, not the load-bearing pillar of any organizational AI strategy — and that is the fourth reason structural support is needed, viewed from the other side.

Look at the numbers and it becomes clear how thin that layer is. In the Stack Overflow Developer Survey 2024, 68% of respondents report coding as a hobby and 40% study outside work for career development; flipped around, even in a population drawn from the top of the industry, 32% don’t write code as a hobby12. OECD PIAAC 2023 data on Japan shows that even where a learning culture exists, about half of employees don’t participate12. The classical Mas & Moretti work on peer effects gives an elasticity of 0.15–0.17 (a 10% rise in coworker productivity raises an individual’s by 1.5–1.7%): present, but not dramatic12. People who self-drive without an environment exist; they are not the majority.

These self-driving exceptions are also structurally fragile. Without an organizational venue, they often maintain their own practice ground on their own time, after hours. That is a personal choice and worth respecting — but the moment the organization concludes “we have these people, so we don’t need the three separations,” it loads itself with three risks: (1) burnout, (2) drift to organizations that do have a better practice ground, and (3) reproducing the wrong story that “people who don’t self-drive are lazy.”

As organizational strategy, depending on a small set of exceptions is the same thing as relying on luck. The prior article was explicit: “a workplace where everyone enjoys learning” is a worthwhile aspiration, but the research evidence on its actual probability is limited12. Replacing the exceptions with a reproducible mechanism — that is exactly what structuring the practice ground via three separations is.

The existence of exceptions is not evidence that structural support is unnecessary. It is indirect evidence that without structural support, the majority is not reached. That re-reading is the heart of reason 4.

Operating the 3 separations as the AI-instruction practice ground

This is the core of the article. Re-operationalize the three separations from the prior article through the lens of an AI-instruction practice ground.

flowchart TB
  TASK(("AI-instruction<br>practice ground"))
  TASK --> S1["Separation 1: Target<br>AI management as<br>everyone's individual skill"]
  TASK --> S2["Separation 2: Domain<br>Embed AI delegation<br>in business management"]
  TASK --> S3["Separation 3: Layer<br>Skip values intrusion<br>preserve psych safety"]

Separation 1 (target): make AI management an individual skill everyone owns

The first separation cleanly splits management theory into AI-facing and human-facing as two different things. As discussed in the prior article, Camille Fournier’s delegation, review, and feedback practices13 and Andy Grove’s TRM (Task-Relevant Maturity)14 reuse extremely well as instruction-design for AI agents. On the other hand, as Self-Determination Theory6 makes clear, dragging “clear instructions and strict review” — which works for AI — into human management violates the autonomy need and backfires.

When the EM names that split inside the organization, that announcement is the entrance to the AI practice ground.

What the EM should do:

  • State publicly: “AI management is an individual skill for everyone, regardless of management track.” For reports who never want to be managers, growing AI-management as an individual skill is still a rational career investment. Cleanly cut the implicit chain “AI is easier if you know management theory → therefore everyone should aim to be a manager”15
  • Build internal materials and case libraries that articulate task granularity, output review, and TRM evaluation in AI-facing language. Even just relabeling existing management training material into “reusable for AI” vs. “human-only” sections produces value
  • Have the EM personally embed AI-agent delegation and review in daily work, modeling the behavior. Shift the contents of player-side work from “writing code directly” to “delegating design to AI and reviewing it” — that lets the EM keep the high playing-manager ratio while becoming the AI role model

What the EM should avoid:

  • Carrying the AI-facing instinct of “clear instructions and strict review” over to direct reports. Engineers with long histories of AI-management success fall into this trap most easily — it’s the AI-era variant of the Peter Principle16
  • Short-circuiting “management theory helps with AI” into “therefore everyone should aim to be a manager”

Separation 2 (domain): embed hands-on AI-delegation practice inside business management

The second separation draws a boundary inside human management: business management that the EM handles directly versus specialist people management that should be handed off to specialists17. From the AI practice-ground angle, this means embedding “hands-on AI-instruction practice” into the business-management side as part of its standard structure — that is how separation 2 actually operates.

Concrete examples follow.

In 1:1s, practice verbalizing “what would I have liked to ask AI?”

Standard 1:1s are used for progress checks and concern sharing. Add a recurring question: “In the last week, what task did you wish you could hand off to AI but couldn’t quite instruct it on?” This is practice in putting “how would I instruct AI” into the report’s own words — directly connected to what the 25-year EM calls “design and verbalization skill is the new technical skill”1.

The EM’s role is not to supply the answer. It is to work through with the report what was vague about that task and what failed to come through. This stays inside business management — not specialist people management like mental-health assessment or career redesign — and stays inside the EM’s authority.

Re-tune Jira ticket granularity to “what AI can chew on”

Jira/Linear-style ticket granularity has historically been optimized for human comprehension. Once you assume AI agents will do delegation work, the optimal granularity changes. Rewriting tickets so AI can solve them in one shot — with explicit context, acceptance criteria, and related files — itself raises the team’s AI-instruction skill.

This is not the EM rewriting individual tickets. It is a team-wide ticket-template update and operational migration — a business-management design decision. It is the concrete implementation of what Ryuzee calls “organizing context”9.
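
To make the shift concrete, here is a minimal sketch of what an AI-ready ticket could capture, expressed as a data structure. The field names and the example values are illustrative assumptions, not a schema this article prescribes.

# A minimal sketch of an AI-ready ticket template. Field names and the
# example values are illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AIReadyTicket:
    title: str
    # Context the agent cannot infer on its own: domain constraints,
    # recent decisions, links to the relevant design docs.
    context: str
    # Concrete, checkable acceptance criteria, not "works correctly".
    acceptance_criteria: list[str]
    # Files the change is expected to touch, so the agent starts in the
    # right place instead of searching the whole repository.
    related_files: list[str]
    # Explicitly out of scope, to keep a one-shot attempt bounded.
    out_of_scope: list[str] = field(default_factory=list)

example = AIReadyTicket(
    title="Return 404 instead of 500 for unknown invoice IDs",
    context="Invoice API service; error responses go through the shared error handler.",
    acceptance_criteria=[
        "GET /invoices/{unknown_id} returns 404 with a JSON error body",
        "Existing invoice tests still pass",
    ],
    related_files=["api/invoices.py", "tests/test_invoices.py"],
)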

Hand AI-driven 1:1 summaries and routine work to direct reports, not just the EM

Deploying AI tools to reduce the EM’s cognitive load is usually framed as “EM efficiency.” Flip the perspective: let direct reports run AI summaries and AI agendas themselves, and you turn ordinary work into hands-on AI-instruction practice.

  • 1:1 minutes → the report has AI summarize them and sends the summary to the EM
  • Status reports → the report has AI format them and posts the result to Slack
  • Document drafts → standardize on having AI write the first pass and the report editing it

These are not “the EM dumping their work on direct reports.” They are a design where giving AI instructions repeats inside the daily work flow. The “triggered situational interest” from Hidi & Renninger’s interest-development model5 gets embedded naturally inside ordinary work.
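
As one hedged sketch of what this looks like on the report’s side: the prompt wording below is illustrative, and the complete parameter stands in for whatever chat-completion call the team already uses; nothing here assumes a specific vendor or API.

# Sketch: the report, not the EM, drives the summary step. `complete`
# stands in for whatever chat/completion call the team already uses;
# the prompt wording is an illustration, not a prescribed template.

SUMMARY_PROMPT = """Summarize these 1:1 notes for my manager.
Group the result into three short sections:
- decisions made
- open questions I owe answers on
- blockers where I need help

Notes:
{notes}
"""

def summarize_one_on_one(notes: str, complete) -> str:
    # The report reviews and edits the draft before sending it: AI
    # writes the first pass, the human stays accountable for the result.
    return complete(SUMMARY_PROMPT.format(notes=notes))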

Before judging “they’re not growing,” cut through with the 5-factor matrix

When a report’s AI usage isn’t progressing, EMs tend to diagnose on a single axis: “no motivation.” The 5-factor matrix from the prior article (capability, situational, career view, environmental, motivation structure)17 needs to be applied to the AI context too.

| Factor | What it looks like when AI usage stalls | EM's range |
| --- | --- | --- |
| Capability | Individual differences in abstraction/verbalization, neurodiversity, fit to role | Limited (occupational health / specialist territory) |
| Situational | Caregiving, childcare, health, mental health | Limited (HR / occupational health territory) |
| Career view | Rational decision that AI usage doesn't pay back for them | Discussion between report and HR |
| Environmental | Insufficient learning opportunity, role models, psychological safety | EM's responsibility |
| Motivation structure | Long-running deficit in SDT's three needs (autonomy, competence, relatedness) | Environment is EM's; deeper layers are specialist territory |

The EM’s lane is environmental factors and the entrance to motivation structure. When other factors come into view, hand off to specialists without hesitation. Refusing to absorb “I couldn’t develop a report into an AI user” is the starting point that stops the punishment-game spiral17.

Separation 3 (layer of debate): avoid values intrusion to preserve practice-ground safety

The third separation refuses to mix legal facts, psychological tendencies, and personal values17. In the AI practice-ground context, this specifically means treating “willingness to use AI” as values territory and respecting it as such.

When an EM says “I want you to use AI more proactively” to a direct report, it’s worth asking what that statement is grounded in.

  • Legal/contractual expectation (a level expected to be reached during paid work hours) → state explicitly
  • Suggestion grounded in a psychological tendency (probabilistic claim that AI capability is advantageous over a long career) → present as information
  • Expectation that crosses into personal values (“I want you to be more interested in AI,” “I want you to play with it after hours”) → tread carefully; this can become intrusion into the report’s life choices

What requires special caution is whether AI capability is becoming a hidden mechanism for demanding off-hours learning. As the prior article discussed, employers do not have the right to mandate off-hours activity18. Smuggling in “everyone in this industry plays with AI after hours, that’s the norm” reproduces exactly the same problem as the implicit demand for after-hours study19. (This is sharper in Japan, where labor law and the membership-employment model make off-hours mandates an explicit legal red line; in jurisdictions with at-will employment, the same dynamic still produces engagement and turnover problems even when it’s legally permitted.)

What an organization can supply as a practice ground is only a frame for trying out AI instructions inside paid work hours. The voluntary exploration of those who also want to play with AI after hours is to be respected — but assuming it implicitly and applying that assumption to everyone is the textbook three-layer mix-up.

flowchart TB
  LEGAL["Legal facts<br>No off-hours mandate"]
  PSYCH["Psychological tendency<br>AI skill helps long-term"]
  VALUE["Personal values<br>Use after hours or not"]
  SAFE["Practice-ground<br>psychological safety"]
  LEGAL & PSYCH --> SAFE
  VALUE -.respect.-> SAFE

Avoiding intrusion into values is, in effect, what guarantees psychological safety for the practice ground. That is how separation 3 operates in the AI context.

EM workload actually drops — separation produces distribution

“The three separations sound like even more work for the EM.” Another common objection. But trace the mechanism and separation is not added load — it’s load distribution.

| Separation | Work that leaves the EM | Where it goes |
| --- | --- | --- |
| Separation 1 (target) | Individually coaching everyone on AI management | Shared learning material, since AI management is now everyone's individual skill |
| Separation 2 (domain) | Mental-health assessment, career redesign, long-term low-performance handling | Occupational health, HR, career consultants17 |
| Separation 3 (layer) | Intrusion into values and the responsibility for it | Returned to the report's own self-determination domain |

Codifying escalation thresholds in separation 2 — “same issue raised in 1:1s for 3 weeks straight → loop in HR,” “performance stall over 6 months → discuss reassignment / PIP with HR,” etc.17 — is a mechanism for the organization to share territory the EM had been carrying alone, and structurally lowers load.
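
A minimal sketch of what codifying those thresholds could look like, assuming the team keeps them as shared data rather than in the EM’s head; the structure and field names are assumptions, and the thresholds mirror the examples above.

# Sketch: escalation thresholds written down as shared data rather than
# carried in the EM's head. Structure and field names are assumptions;
# the thresholds mirror the examples in the text.

ESCALATION_RULES = [
    {
        "signal": "same issue raised in 1:1s",
        "threshold": "3 consecutive weeks",
        "hand_off_to": "HR",
    },
    {
        "signal": "performance stall",
        "threshold": "6 months",
        "hand_off_to": "HR (discuss reassignment / PIP)",
    },
]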

On top of that, once separation 1 makes AI management a shared organizational skill, the EM’s own AI-driven work (1:1 summaries, progress aggregation, doc maintenance) gets efficient inside the same frame. The context-organization that AI delegation requires becomes a team-wide asset rather than an individual EM’s9.

In the end, not doing the separations is what perpetuates the punishment game; doing them is the only exit. That was the core of the prior article, and the same conclusion holds in the AI practice-ground frame.

“We deployed the tools and nobody uses them” — confusing tool rollout with structuring the practice ground

“We rolled out Copilot and ChatGPT Enterprise to everyone, training is done. Only some people are using them.” Many organizations land here. From inside that situation, “structure your practice ground” can sound like noise — “we already set up the environment.”

But that very feeling is a signal that “providing tools” and “structuring the practice ground” are being conflated. When an organization invests in AI capability, structural support is better viewed in three layers.

flowchart TB
  L1["L1: Tool provision<br>Licenses, training, permission"]
  L2["L2: Workflow redesign<br>Ticket granularity, 1:1s, criteria"]
  L3["L3: Psychological safety<br>Failure tolerance, no values push"]
  L1 --> L2 --> L3
  RESULT["AI embedded in<br>daily work"]
  L3 --> RESULT

Most organizations stop at L1. Tool deployment is easy to ROI-measure, easy to get budget approved, and clear in terms of which department owns it (IT, HR, executives). But L1 alone is just a state of “you have permission to use” — the workflows and evaluation criteria are still pre-AI. That is what “we deployed it and nobody uses it” actually is.

Self-diagnosis: what’s missing in an L1-only org

Are you actually in L2? Some typical gaps to check.

  • Has ticket granularity been re-tuned to “what AI can chew on” (explicit context, acceptance criteria, related files)?
  • Do 1:1 agendas include “what task did you wish you could hand off to AI but couldn’t”?
  • Are code-review standards neutral about “is this AI-generated”? (No “handwritten = high marks, AI-generated = needs scrutiny” bias)
  • Is it explicit which parts of status reporting and doc maintenance can be left to AI?
  • Is “experimentation time” for AI trial-and-error carved out within paid work hours?

L3 is even harder to see.

  • Is there an atmosphere where “I asked AI and got a weird answer” can be shared as a funny story?
  • In evaluation conversations, are values pushes like “be more interested in AI” or “play with it after hours” being avoided?
  • Have EMs built the habit of diagnosing reports who aren’t using AI through the 5-factor matrix17 rather than the single axis “no motivation”?
  • Is responsibility for AI-generated outputs explicitly assigned (prompt designer, reviewer, final approver)?

If any of these are untouched, the organization has set up L1 only and skipped L2/L3 — exactly the tool-rollout-vs-structuring confusion.
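
As a rough sketch, the same self-diagnosis can be kept as a checklist the organization revisits periodically; the item wording mirrors the bullets above, and the pass/fail logic is an assumption, not a validated instrument.

# Sketch: the L2/L3 self-diagnosis above as a revisitable checklist.
# Item wording mirrors the bullets in the text; the pass/fail logic
# is an assumption, not a validated instrument.

CHECKLIST = {
    "L2 workflow redesign": [
        "Ticket granularity re-tuned to what AI can chew on",
        "1:1 agendas ask what the report wished they could hand to AI",
        "Code-review standards are neutral about AI-generated code",
        "It is explicit which reporting/doc work can be left to AI",
        "Experimentation time exists within paid work hours",
    ],
    "L3 psychological safety": [
        "Weird AI answers can be shared as funny stories",
        "No values push in evaluations (interest level, after-hours use)",
        "Non-use is diagnosed with the 5-factor matrix, not 'no motivation'",
        "Responsibility for AI output is assigned (prompter, reviewer, approver)",
    ],
}

def remaining_gaps(done: set[str]) -> list[str]:
    # Anything unchecked means the organization is still at L1 plus gaps,
    # i.e. tool rollout has been mistaken for structuring.
    return [item for items in CHECKLIST.values() for item in items if item not in done]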

When utilization still doesn’t pick up — re-apply the 5-factor matrix

If L1 through L3 are all in place and utilization still isn’t moving, the cause is outside environmental factors. Time to re-apply the 5-factor matrix17 in the AI context.

| Factor | What's left after the environment is clean | Prescription |
| --- | --- | --- |
| Capability | Individual differences in fit to AI collaboration (abstraction, verbalization, critical review) | Fit-to-role: separate AI-assumed roles from in-hours-self-contained roles |
| Situational | Mental health, life events, health problems | Loop in occupational health and HR (outside the EM's judgment) |
| Career view | Rational non-investment in AI fluency (tenure horizon, role design) | Respect the choice; value contributions on a different axis |
| Motivation structure | Long-run deficit of SDT's three needs | Deep layers are beyond a solo EM; territory for outside coaching and organizational-development specialists |

The important point: don’t aim for “100% of employees use AI equally.” Given that even Stack Overflow Survey 2024 shows 32% of top-tier developers don’t write code as a hobby12, AI usage will show a similar distribution. Treat it as a fit-between-role-and-person problem.

The other side of “fit-to-role” — non-firing-by-default creates an “accommodation” dilemma

“Fit-to-role” should not be loaded with too much optimism, though. Inside Japan’s labor law structure, this is not free optimization but a constrained containment problem.

As the prior article discussed, Japan’s restrictions on dismissal of regular employment are in the relatively strong tier among OECD countries17, and in Bloomberg L.P. v. (employee) (Tokyo High Court, 24 April 2013) a dismissal for performance after three rounds of PIP was ruled invalid17. In the Shiga Council of Social Welfare case (Supreme Court, 26 April 2024), the Court held that for an employee with a job-type-limited agreement, reassignment without consent is not permitted17. The path “doesn’t use AI → fire them” is legally not open — that is the structural reality Japanese EMs face. (In jurisdictions with at-will employment the legal constraint is weaker, but the same dynamic resurfaces in different form: turnover costs, reputational risk, ADA/EEOC exposure for protected categories, and the simple fact that firing the bottom 20% does not magically produce the top.)

What this generates is the dilemma readers will already feel intuitively — “who does the standard accommodate, the AI users or the non-users?”

flowchart TB
  ADOPT["AI-adopter layer<br>Productivity gain"]
  NONADOPT["Non-adopter layer<br>Cannot be exited<br>(dismissal limits)"]
  STD["Whose standard does<br>evaluation align with?"]
  DRAG["Lower-bound standard<br>→ no upside for adopters"]
  CONFLICT["Upper-bound standard<br>→ unfair evaluation<br>and labor disputes"]
  ADOPT --> STD
  NONADOPT --> STD
  STD --> DRAG & CONFLICT

If you uniformly raise the work standard inside the same job grade and role definition under “AI is assumed,” the non-adopter layer takes consecutive low marks in evaluation conversations and labor risk accumulates from an organizational-justice angle17. If you instead anchor the standard low to accommodate non-adopters, the adopter layer ends up in a state where “effort doesn’t translate into impact” and the self-driving exception talent leaks out — the “exception-dependence fragility” from reason 4 reappears here.

The only realistic way to dissolve the dilemma is role differentiation

If you try to dissolve this dilemma inside one role, you always fail. The realistic route is to differentiate the roles themselves. The “real specialist track” from organization-design principle 4 in the prior article17 takes on its decisive AI-era meaning here.

| Track | Nature of work | Evaluation axis | AI usage |
| --- | --- | --- | --- |
| AI-assumed track | Outputs centered on design, delegation, review | Output volume and quality per unit time | High (assumed) |
| In-hours-self-contained track | Maintenance, operations, QA, documentation, stable run | Stability, accuracy, continuity | Medium-low (optional) |

This track split is not “demoting people to a lower track.” As the Person-Job Fit research from the prior article shows17, the same person in a different role experiences large changes in job satisfaction, organizational commitment, and performance (ρ = .20–.56). People who find their strengths in the stable-run track exist, and that role is genuinely necessary for the organization. The crucial point is that compensation and status are designed fairly across tracks, with no implicit hierarchy of “AI-assumed track = superior.” Real implementation, as in Omron’s Fellow program, is needed17.

Even then, the dilemma is not fully dissolved. Track differentiation as a real institution takes time to set up. In the interim, EMs spend a stretch of time holding adopters and non-adopters inside the same role. Operationally, that period is neither “uniform pressure on everyone” nor “uniform laissez-faire on everyone.” It’s individual operation: appropriate challenge and discretion for adopters, clear expectations achievable in the current role for non-adopters. This is the natural extension of the observe / arrange / place verbs from the prior article’s Don’t try to change people EM playbook8 — and because dismissal is not an available option, this is the territory where interpersonal-skill maturity actually gets tested.

Three operational conclusions

Three practical conclusions follow.

First, before concluding “structural support doesn’t work,” diagnose honestly which of L1/L2/L3 you actually stopped at. Misreading “tool deployment = structuring complete” leaves separations 2 and 3 untouched while you slap a “the AI strategy failed” label on it.

Second, residual non-use after L1 through L3 are all done is the territory where misdiagnosis must be avoided. Don’t judge on the single axis of “no motivation”; cut through with the 5-factor matrix and, when factors outside the EM’s range come into view, hand off to specialists or HR without delay17.

Third, avoid both fantasies — “everyone will use it” and “we’ll just accommodate the non-users.” Within Japan’s labor law structure (and in other jurisdictions for their own reasons), exiting non-adopters is not a real option, but lowering the standard isn’t necessary either. Role differentiation — making the specialist track real — is the structural answer that lets adopters and non-adopters coexist fairly inside one organization. That is the practical AI-era meaning of organization-design principle 4 from the prior article17.

Linking to the individual: the moments a report stands on the side that gives AI instructions

When the EM puts the structure in place, what shows up on the report’s side? There are three observable transition points.

Transition 1: “Let me ask AI first” comes out of their mouth automatically

The first sign is a change in reflex when reports surface a problem. When a report says “Hey, what should we do about this?” and the EM consistently replies “First have AI organize it and show me the output,” reports gradually shift their own default — AI consultation becomes the first option in their reflex.

This is not mindset coercion but a behavior change driven by workflow design. Putting the 25-year EM’s framing, “don’t wait for tasks; find the work yourself”1, into practice typically begins with consulting AI as the entrance. AI doesn’t rush you to a decision and will go along with you, which makes it the ideal training partner for the skill of “asking a question starting from the vague edge.”

Transition 2: While writing the prompt, they notice the holes in their own understanding

The second transition is when reports start reporting that “trying to verbalize what to tell AI made me realize what I didn’t actually understand myself.” As discussed in Writing with AI — a guide to verbalizing the vague, this is one of the largest by-products of working with AI.

This realization is the substantive mechanism by which instruction-waiting breaks down. “I don’t know what I should do” is often a state of “I can’t yet put into words what I don’t know.” Giving AI instructions creates a forced verbalization moment inside ordinary work — a structural prompt rather than personal pressure.

Transition 3: They start critically reviewing AI’s output

The third and final transition is when a report stops swallowing AI output whole and starts reviewing it critically. As Dell’Acqua and colleagues warn, blind dependence on AI in domains it’s bad at lowers performance10. The moment a report shifts from “AI said it, so it’s right” to “how do I verify AI’s output?” — that is the moment they have fully crossed onto the side that gives AI instructions.

This shift means the report has begun to carry their own intuitive sense of TRM-style evaluation — what the model can reliably do and what it can’t14. It’s the process of management theory’s core skill taking root as an individual skill in everyone, regardless of management track.
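
One way to make that review habit concrete, as a hedged sketch: a short checklist the report runs before accepting AI output. The questions are assumptions distilled from this section (verify rather than trust, check against acceptance criteria), not a prescribed process.

# Sketch: a pre-acceptance review pass for AI output. The questions are
# distilled from this section as an illustration, not a prescribed process.

REVIEW_QUESTIONS = [
    "Do the stated facts check out against sources I can verify myself?",
    "Does the output satisfy the ticket's acceptance criteria, not just look plausible?",
    "Is this task inside the range where the model is reliable, or near its jagged edge?",
    "If I sign off on this, can I explain and defend every part of it?",
]

def ready_to_accept(answers: dict[str, bool]) -> bool:
    # Accept only when every question has been answered 'yes';
    # anything else goes back for another iteration or human rework.
    return all(answers.get(q, False) for q in REVIEW_QUESTIONS)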

When the EM has no bandwidth — start with escalation

Everything above assumes the EM has authority and bandwidth to introduce the three separations. In reality, EMs in the middle of the punishment game often have to make the case for the three separations upward themselves.

In that case, the starting move is for the EM to escalate the prior article’s structural diagnosis upward. Not “I want the three separations because it’ll lighten my load,” but “we need the three separations to structurally raise the organization’s AI capability.” Translate the frame into organizational AI ROI.

That frame lands with executives. The point that AI investment “amplifies in healthy organizations and worsens in dysfunctional ones”9 means that introducing AI without the three separations is wasted investment. The 40% quality-improvement result from Dell’Acqua and colleagues10 only materializes in environments where context is organized. Frame this not as “the EM is suffering” but as “the organization’s AI-transformation strategy” — that translation is the first step toward escalation traction.

And as detailed in the related Don’t try to change people EM playbook, in the escalation conversation too the verb “change the upper layer” is poor cost-benefit both legally and psychologically. Stick to the support-side verbs — observe, record, communicate, connect, arrange — and play the role of making the current state visible and organizing decision material8. What moves is the upper layer’s judgment; what the EM controls ends at the quality of information provided.

Conclusion — the moment EMs see the structure, the organization’s AI capability starts moving

Four points.

  1. AI capability is the goal; breaking instruction-waiting is just one means. Move the “thing to fix” from the individual’s mindset to the absence of an organizational practice ground. Mindset alone won’t start the engine — without organized context and psychological safety, the practice of giving AI instructions never even ignites9 10. People who use AI without a venue do exist, but they are exceptions and cannot serve as the load-bearing pillar of an AI strategy.
  2. The 3 separations are the structure of the practice ground. Separation 1 (target) makes AI management an individual skill for everyone; separation 2 (domain) embeds hands-on AI delegation inside business management; separation 3 (layer) avoids values intrusion to preserve psychological safety. Tool provision (L1) alone is not a practice ground — it functions only when workflow redesign (L2) and psychological safety (L3) are in place.
  3. Separation is load distribution, not load increase. Territory the EM had been carrying alone is shared organizationally, and the EM’s own AI work gets efficient as a by-product. What perpetuates the punishment game is the act of not doing the separations.
  4. The “accommodate non-users” dilemma is dissolved by role differentiation. Within Japan’s labor law structure, exiting the non-adopter layer is not an option, but anchoring the standard low isn’t necessary either. Trying to hold adopters and non-adopters in the same role breaks the organization — making the specialist track real (organization-design principle 4 from the prior article17) is the structural answer that lets both coexist fairly inside one organization.

Putting reports on the side that gives AI instructions is not done by issuing slogans. It happens through the cumulative effect of small structural changes inside ordinary work: how questions are asked in 1:1s, the granularity of Jira tickets, how AI tooling is operationally deployed, the three-layer discipline in evaluation conversations. These promote natural behavior change rather than coerce it. The shift from “triggered situational interest” to “individual interest” in Hidi & Renninger’s interest-development model5 and SDT’s intrinsic-motivation research6 only ignites through this kind of structural support.

The actual goal is for people to become capable of using AI. Breaking instruction-waiting is a waypoint. The moment an EM recognizes the structural problem, the organization’s AI capability begins to move — the EM takes responsibility for structure, and returns the report’s choices to the report. Redrawing that boundary is the first job of an EM in the AI era.


References

References numbered to match in-text citations.

  1. 25年エンジニアマネージャーが見た、AI時代に「生き残る」ために必要な3つの力 (The three skills needed to survive the AI era, as seen by a 25-year engineering manager) — necotake, Qiita (13 January 2026). 【Reliability: Medium (practitioner essay based on 25 years of experience)】

  2. 9割以上の部長がプレイングマネジャーという実情 (The reality that over 90% of department heads are playing managers) — HR Pro / Sanno Institute of Management survey (conducted 2019, published 2020; subjects: department heads at listed companies with 100+ employees). 【Reliability: Medium-High】

  3. 罰ゲーム化する管理職―「強いミドル」は復活するのか (Management roles turning into a punishment game: will the "strong middle" return?) — Yuji Kobayashi, Persol Research Institute (15 December 2025). 【Reliability: High】

  4. State of the Global Workplace 2024 — Gallup (2024 edition). Japan engagement 6%, East Asia average 18%, world average 23%. 【Reliability: High (official international survey)】

  5. The Four-Phase Model of Interest Development — Hidi & Renninger, Educational Psychologist, 41(2), pp. 111–127 (2006, DOI: 10.1207/s15326985ep4102_4). 【Reliability: High (peer-reviewed)】

  6. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being — Ryan & Deci, American Psychologist, 55(1), pp. 68–78 (2000). Related: Self-determination theory and work motivation — Gagné & Deci, Journal of Organizational Behavior, 26(4), pp. 331–362 (2005, DOI: 10.1002/job.322). 【Reliability: High (heavily cited peer-reviewed work)】

  7. Learned helplessness at fifty: Insights from neuroscience — Maier & Seligman, Psychological Review, 123(4), pp. 349–367 (2016, DOI: 10.1037/rev0000033). 【Reliability: High (peer-reviewed)】

  8. The “don’t try to change people” EM playbook — Do’s and Don’ts for supporting six types of subordinates in Japanese IT — this blog (28 April 2026). Catalog of legally safe verbs. 【Reliability: Internal reference】

  9. 生成AI時代のチーム設計 ― 役割と協働の再構築 (Team design in the generative-AI era: rebuilding roles and collaboration) — Ryutaro Yoshiba (29 November 2025). 【Reliability: Medium-High (industry-practitioner synthesis with cited research)】

  10. Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality — Dell’Acqua, McFowland, Mollick et al., Harvard Business School Working Paper No. 24-013 (2023). Field experiment with 758 BCG consultants and 18 tasks. 【Reliability: High (large-scale field experiment)】

  11. Psychological Safety and Learning Behavior in Work Teams — Edmondson, Administrative Science Quarterly, 44(2), pp. 350–383 (1999, DOI: 10.2307/2666999). 【Reliability: High (classical peer-reviewed work)】

  12. Why packing all of management onto one person turns the role into a punishment game — this blog (27 April 2026). Filter-at-hiring vs. fit-to-role-for-existing analysis. 【Reliability: Internal reference】

  13. The Manager’s Path — Camille Fournier, O’Reilly Media (2017). 【Reliability: High (industry-standard text)】

  14. High Output Management — Andrew S. Grove, Random House (1983, revised editions). The Task-Relevant Maturity (TRM) concept. 【Reliability: High (classical management text)】

  15. Why packing all of management onto one person turns the role into a punishment game — this blog (27 April 2026). The “AI management as individual skill for all; people management as specialist domain only for those who want it” framing. 【Reliability: Internal reference】

  16. Promotions and the Peter Principle — Benson, Li & Shue, The Quarterly Journal of Economics, 134(4), pp. 2085–2134 (2019, DOI: 10.1093/qje/qjz022). Data on 38,843 sales workers across 131 U.S. firms. 【Reliability: High (peer-reviewed)】

  17. Why packing all of management onto one person turns the role into a punishment game — this blog (27 April 2026). The 5-factor matrix, domain separation, and three-layer framework. (For non-Japan readers: the Bloomberg L.P. and Shiga Council of Social Welfare cases referenced are Japanese precedents establishing strong dismissal protections and limits on involuntary reassignment. The “membership-based employment” model is the Japanese norm of hiring into the company rather than into a specific job, so role redefinition rather than dismissal is the available lever.) 【Reliability: Internal reference】

  18. Labor Standards Act, Articles 32 and 37 — e-Gov legal search (Japan). Provisions on working-time management. 【Reliability: High (statute)】

  19. エンジニアは業務時間外でも勉強するべきなのか (Should engineers study outside work hours too?) — Ayumu Yonemura, Axia Inc. (18 July 2017). Management-perspective essay on off-hours study. 【Reliability: Medium (practitioner essay)】

Other references (not numbered in body)

  • The job demands-resources model of burnout — Demerouti, Bakker, Nachreiner & Schaufeli, Journal of Applied Psychology, 86(3), pp. 499–512 (2001, DOI: 10.1037/0021-9010.86.3.499). 【Reliability: High (peer-reviewed)】

This post is licensed under CC BY 4.0 by the author.