Reclaim Writing, Level Up Your AI Prompting: A 3-Stage Roadmap for Workplace Text
This article was generated by AI. The accuracy of the content is not guaranteed, and we accept no responsibility for any damages resulting from use of this article. By continuing to read, you agree to the Terms of Use.
- Target audience: IT engineers and knowledge workers who want to improve workplace text quality and AI prompting skills simultaneously
- Prerequisites: Basic experience with conversational AI tools (ChatGPT, Claude, etc.)
- Reading time: 25 minutes
Overview
Chat replies become a single emoji reaction. Slack messages collapse to “Got it.” Anything complicated turns into “let’s just hop on a call.” Reports get handed off to AI. Before long, the act of constructing sentences in your own words has quietly disappeared from your workday. The problem isn’t that “your writing has gotten worse” — it’s something more fundamental: you’ve stopped writing in the first place.
But rushing into “Starting tomorrow, I’ll structure every single message with PREP” is counterproductive. The pressure of always producing polished prose actually accelerates the not-writing spiral.
What this article proposes instead is a sustainable 3-stage roadmap.
- Stage 1: Reclaim writing itself — Replace one emoji reaction with one sentence. Replace one quick call with three lines. Casual style is fine. Emojis are fine. Just bringing back the frequency reverses the negative spiral
- Stage 2: Gradually introduce structure — Once writing feels natural again, apply the PREP method (Point → Reason → Example → Point) to logical workplace text like requests and reports. The same “bottom line up front” principle the U.S. military standardized as BLUF eliminates follow-up questions
- Stage 3: Build in AI review — Close the loop by sending your PREP draft to a generative AI for a 30-second review. Automated writing feedback research has confirmed effect sizes of g = 0.55 overall and g = 0.65 on transfer tasks [1]
Once you reach Stage 3, a side effect kicks in: your ability to give AI clear, effective instructions (prompt engineering) develops naturally. The very act of specifying evaluation criteria and output format to an AI is practice in what prompt engineering research calls “Critical Online Reasoning” — a metacognitive skill [2].
In other words, this single roadmap simultaneously develops three capabilities — communication clarity, structured thinking, and AI fluency — through one habit. The first step is just this: “Tomorrow, take one chat you’d normally answer with an emoji, and write it in your own words.”
This article will (1) confirm with data why “not writing” is the real root cause, (2)–(5) detail each stage, (6) provide workplace-text-specific templates, (7) show a one-week roadmap from Stage 1 to Stage 3, and (8) respond to common objections.
1. Root cause: It’s not that you got worse — you stopped writing
1.1 The workplace is full of “ways to avoid writing”
Look closely at modern workdays and you’ll find endless mechanisms designed to spare you from composing sentences.
- Emojis, stickers, reactions: A single thumbs-up closes the loop on any chat
- Ultra-short canned phrases: “Got it,” “Thanks!”, “Sounds good,” “Will do”
- Voice, video, screen sharing: “Easier to explain on a quick call,” “Let me share my screen”
- Recycled templates: Emails and reports become “last time’s version with the numbers swapped”
- AI ghostwriting: One prompt generates a full report, email, or proposal
- Loom-style screen recordings: Hit record, send the link, no prose needed
From a productivity standpoint, none of these are wrong in isolation. Using emojis in casual chat is appropriate. Switching to a call for a complex technical debate is rational. The problem isn’t any individual tool — it’s that, taken together, the cognitively demanding act of constructing your own sentences has been almost entirely engineered out of your workday. Try counting, over the course of a week, how many times you wrote three or more sentences that you actually composed in your head. For many knowledge workers, that count is approaching zero.
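If you want to make that weekly count concrete rather than impressionistic, a few lines of Python can run it over an exported message log. This is a minimal sketch under stated assumptions — `self_written_count` is a hypothetical helper with a deliberately naive punctuation-based sentence splitter, not an established metric:

```python
import re

def self_written_count(messages: list[str], min_sentences: int = 3) -> int:
    """Count messages containing at least `min_sentences` sentences.

    A rough proxy for "text I actually composed myself"; splitting on
    terminal punctuation is crude but good enough for a weekly tally.
    """
    def n_sentences(text: str) -> int:
        return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

    return sum(1 for m in messages if n_sentences(m) >= min_sentences)

week = ["👍", "Got it", "Ran the migration. No errors. Rollback plan is ready."]
print(self_written_count(week))  # → 1
```

Run against a real week of chat exports, many knowledge workers will find the number uncomfortably close to zero — which is the point of the exercise.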
So this article isn’t telling you to abandon emojis or never get on a call. The proposal is much smaller: reclaim just one situation per day where you could have written.
1.2 Expertise research: skills you don’t use quietly atrophy
This matters because writing is a sophisticated cognitive skill that requires deliberate practice. Ericsson’s body of expertise research [3] has consistently shown that skills are maintained and improved only through continued use. Conversely, once practice opportunities disappear, ability quietly decays.
The crucial point: you usually don’t consciously feel yourself getting worse. People don’t directly perceive skill decline — what they feel is the strange friction of trying to write a long message after a long absence, or the after-the-fact realization of “wait, that didn’t land.” The very fact that you’re writing less means you’re also encountering fewer signals that you’ve gotten worse.
1.3 Overconfidence bias accelerates the negative spiral
Worse, human cognition has a fundamental bias built in. Kruger (NYU) and Epley (University of Chicago), in a classic 2005 paper in the Journal of Personality and Social Psychology [4], demonstrated that email senders overestimate their tone-conveying ability by roughly 20 percentage points.
Representative results from tone-transmission tasks (sarcasm, sincerity, humor):
- Voice communication: actual accuracy ~75%, sender prediction ~78% → roughly aligned
- Email communication: actual accuracy ~56%, sender prediction ~78% → large gap
The cause is egocentrism bias [4]. When you reread your own writing, you read it through the lens of what you intended — making it nearly impossible to imagine how a recipient who lacks that intent will interpret it.
This bias exists independent of writing frequency. That’s exactly why writing less makes the rare things you do write land even worse — a negative spiral:
- Don’t write → writing muscles atrophy
- Occasionally write → egocentrism bias makes it land even worse
- “See, text just doesn’t communicate” → retreat further into emojis, calls, AI ghostwriting
- Back to step 1
1.4 The same pattern shows up in survey data
Independent survey data points the same direction. According to a March 2024 survey by the keyboard app Simeji (n = 3,516, Gen Z and adults 25+), reported by Steenz, 57.7% of respondents said they “often” or “sometimes” feel their meaning fails to come through in text — about 1.9× the rate for voice (30.2%) [5]. The most common cited reason: “emotion and nuance are hard to convey,” consistent with Kruger & Epley’s experimental findings.
Notably, this data captures a preference away from text tools and toward voice. It may be observing the not-writing spiral itself in motion.
1.5 Which is why we reclaim it in three stages
Reversing the spiral requires getting writing frequency back. And not by aiming for perfect structure on day one — first, restore the act of writing to your workday. Build from there.
The next chapters lay out the roadmap: Stage 1 (just write) → Stage 2 (add structure) → Stage 3 (AI review) → and as a byproduct, Stage 4 (prompting skill).
2. Stage 1: Reclaim “writing itself” (any format welcome)
2.1 All you need is “compose one sentence in your own words”
The first stage is dead simple. Once a day, replace one situation you’d normally close with an emoji, a call, or AI-ghostwritten text with one where you compose the words yourself.
Tone doesn’t matter. Casual is fine. Emojis are welcome. It doesn’t need to be long. The only thing you’re being asked to do is perform the cognitive act of building a sentence in your own head once a day.
Concrete examples:
| What you usually do | Stage 1 replacement |
|---|---|
| 👍 reaction | “Thanks so much for handling this — really helpful!” |
| “Got it” | “Got it! Will handle today and share results by EOD.” |
| “Let’s hop on a call” | “Let me try writing it out first — three main points: …” |
| “Hey AI, summarize this meeting” | Try writing a 3-line summary yourself first |
2.2 What you gain at Stage 1
This alone starts reversing the negative spiral. The core of Ericsson’s deliberate practice framework [3] is “securing practice opportunities” — and frequency matters more than format correctness, especially at the start.
Concrete benefits:
- Psychological resistance to writing fades — the act itself returns to your daily routine
- Vocabulary starts moving again — phrases you’d stopped using resurface
- Your thinking gets verbalized — writing increases the resolution of what you actually think
At this point, you don’t need AI review or PREP yet. Just write. Do this for 3–5 days, and once writing feels less effortful, move on.
2.3 Casual style isn’t a “downgrade”
You don’t need to feel guilty about casual prose. For empathy messages and small talk, casual writing is the right choice for the context.
The problem isn’t formality — it’s substituting a single emoji where your own words could have appeared at all. Even one casual sentence, if you wrote it yourself, keeps your writing muscles intact.
3. Stage 2: Gradually try “structured” writing (PREP / BLUF)
3.1 When structure helps and when it doesn’t
Once writing feels less daunting, introduce structure only for workplace text that requires logical communication. Not every message needs structure.
| Type | Approximate share | Recommended format |
|---|---|---|
| Logical communication (requests, reports, proposals) | ~80% | Structured (PREP / BLUF) |
| Empathy, small talk, casual | ~20% | Casual prose in your own words |
Stage 2 is about applying structure only to the 80%.
3.2 The four elements of PREP
PREP is a framework that orders a message around four elements [6][7].
| Order | Element | Role |
|---|---|---|
| 1 | Point | State the conclusion / main point first |
| 2 | Reason | Explain why it holds |
| 3 | Example | Concrete examples or data backing the reason |
| 4 | Point | Restate the conclusion to close |
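The fixed ordering is the whole trick, which also makes it easy to mechanize. The table above can be captured in a few lines of Python — `prep_message` is a hypothetical helper written for this article, not part of any tool:

```python
def prep_message(point: str, reason: str, example: str, closing: str = "") -> str:
    """Assemble a message in PREP order: Point, Reason, Example, Point.

    The closing Point restates the conclusion; it defaults to echoing
    the opening Point when no separate closing is given.
    """
    return "\n".join([
        f"[P] {point}",
        f"[R] {reason}",
        f"[E] {example}",
        f"[P] {closing or point}",
    ])

print(prep_message(
    point="Handout printing has not started yet.",
    reason="A customer complaint took priority this morning.",
    example="Company A called at 10am; I committed to a callback at 11.",
    closing="Printing will be done before the afternoon meeting.",
))
```

The function changes nothing about the content — only the arrival order — which is exactly the claim PREP makes.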
3.3 The military version: BLUF (Bottom Line Up Front)
The “lead with your conclusion” principle isn’t confined to business media. The U.S. military standardized the same idea as BLUF (Bottom Line Up Front), used across the Navy, Marine Corps, Army, and Air Force [8]. BLUF aims to enable busy, time-constrained, information-saturated recipients to make decisions faster [8].
PREP and BLUF differ in detail but share the same core: the conclusion goes first. The fact that two independent domains — business and military — converged on this principle speaks to its robustness under time pressure.
3.4 Before/after (a status report)
Adapting the canonical example from makefri.jp [7]:
Before (no PREP): A team member starts with context. “So this morning a customer reached out about a complaint, and I had to drop everything to handle it, which meant…” The manager interrupts: “Wait, what about the meeting handout — did you print it?”
After (PREP applied): “I haven’t started printing the meeting handout yet (P). I prioritized a customer complaint instead (R). Specifically, Company A reached out at 10am and I committed to a callback at 11 (E). I’ll definitely have the printing done before the afternoon meeting (P).” The manager grasps the full picture in one read.
Roughly the same length — but just changing the arrival order of information eliminates the follow-up questions. You’re closing off the “interpretive ambiguity” Kruger & Epley identified, on the structural side.
3.5 What you gain at Stage 2
- Fewer follow-up questions — managers, peers, and customers stop asking “so what’s the conclusion?”
- Faster decisions — recipients grasp the whole picture from the opening, accelerating judgment
- Less time spent writing — having a template reduces hesitation and compresses composition time
By this point, resistance to writing has nearly vanished, and structure starts running automatically in your head. Stage 3 brings AI in as your evaluator.
4. Stage 3: Build in AI review (the 30-second closed loop)
4.1 Flow of the closed loop
```mermaid
flowchart TB
    A[Task arises] --> B[Draft using<br>PREP structure]
    B --> C[Ask AI to review<br>with clear criteria]
    C --> D{Issues found?}
    D -->|Yes| E[Revise yourself<br>based on feedback]
    D -->|No| F[Final tone tuning<br>emotion / respect / emoji]
    E --> F
    F --> G[Send]
    G --> H[Apply learnings<br>to next text]
    H --> A
```
The key is to never let AI rewrite for you. Cast AI as the evaluator (reviewer); you do the rewriting.
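To see that “evaluator, never rewriter” is an enforceable contract and not just a vibe, here is a no-AI prototype of the review step. It is a rule-based sketch: the four checks loosely mirror the review criteria used later in this article, but the regex heuristics are my own illustrative assumptions, not anything an actual model does:

```python
import re

def review_prep(draft: str) -> dict[str, str]:
    """Rule-based stand-in for the AI reviewer: flag issues, never rewrite.

    Returns a verdict per criterion; the original draft is untouched,
    which is the whole point of the evaluator role.
    """
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", draft.strip()) if s]
    first = sentences[0] if sentences else ""
    return {
        # 1. Conclusion up front: an opening sentence exists and is short.
        "point_first": "OK" if first and len(first.split()) <= 25 else "Needs improvement",
        # 2. A reason is signalled explicitly somewhere.
        "reason_present": "OK" if re.search(r"\b(because|since|reason)\b", draft, re.I) else "Needs improvement",
        # 3. Concreteness proxy: at least one number or date-like token.
        "concrete_example": "OK" if re.search(r"\d", draft) else "Needs improvement",
        # 4. The requested action or next step is stated.
        "action_clear": "OK" if re.search(r"\b(please|I'll|I will|could you)\b", draft, re.I) else "Needs improvement",
    }
```

A real LLM reviewer is far better at this, of course — but the interface is the same: criteria in, verdicts out, draft untouched.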
4.2 AI feedback’s effect is confirmed by meta-analysis
The natural skepticism — “does AI review actually grow your writing skill?” — gets a clear answer from accumulated education research on Automated Writing Evaluation (AWE).
Fleckenstein and colleagues, in a 2023 Frontiers in Artificial Intelligence paper, conducted a 3-level meta-analysis of 20 studies, 84 effect sizes, and 2,828 learners [1]. Key findings:
| Condition | Effect size g | Interpretation |
|---|---|---|
| Overall effect | 0.55 | Medium–large |
| L2 learners | 0.72 | Large |
| L1 learners | 0.40 | Medium |
| Long-term intervention (≥2 sessions) | 0.66 | Large |
| Short-term intervention (1–2 sessions) | 0.18 | Small |
| Transfer tasks (writing in new contexts) | 0.65 | Large, significant |
The standout finding is g = 0.65 on transfer tasks [1]. Learners who received AI feedback improved not just on the texts that received feedback, but on entirely new texts. That’s decisive evidence the writer’s own skill is growing — not just that AI is patching their draft.
A separate meta-analysis (Zhai & Ma 2023, Journal of Educational Computing Research) covering 26 studies and 2,468 participants reported an overall effect size of g = 0.861 [9].
A caveat: these meta-analyses focus on educational settings (especially second-language learning), and transfer to working professionals’ workplace text isn’t strictly the same question. Still, the underlying mechanism — “immediate feedback × repeated practice” [3] — rests on cognitive characteristics that are common to both contexts, so the direction of the effect should generalize. Take the qualitative finding (“receiving feedback and rewriting yourself grows the writer’s own skill”) more seriously than the exact effect-size numbers.
4.3 Strengths and limits of ChatGPT feedback
Research focused specifically on LLMs exists too. Steiss and colleagues, in a 2024 Learning and Instruction study [10], compared the quality of human and ChatGPT feedback.
- Human feedback rated higher on quality — 4 out of 5 dimensions favored humans
- But the gap was modest, and ChatGPT’s time efficiency makes its practical value high
- ChatGPT excels at surface features (mechanics, grammar, tone, formatting)
- For content and higher-order reasoning, ChatGPT’s value is more limited
Most workplace text aims to “convey existing facts, requests, or status clearly,” so the surface-feature improvements ChatGPT delivers cover a large share of practical needs. Substantive content remains the domain of the writer’s own expertise.
4.4 What you gain at Stage 3
- Third-party-perspective feedback in 30 seconds — what used to take hours or days waiting for a manager review arrives instantly
- Deliberate practice’s three requirements (goal, criteria, immediate feedback) are all in place [3] — daily writing turns into actual practice
- Egocentrism bias gets mechanically corrected before sending [4] — ambiguities invisible to the writer get flagged
5. Stage 4: As a byproduct, your “AI prompting skill” grows
5.1 Review-request prompts are prompt literacy
If you keep doing Stage 3, a fourth skill grows almost automatically — prompt engineering ability.
In contemporary prompt engineering research, the trend is to treat PE not as a tactical trick but as 21st-century literacy. Federiakin and colleagues, in a 2024 Frontiers in Education paper [2], organized PE into four components:
- Understanding basic prompt structure: ability to include instruction, context, input data, and output indicator
- Prompt literacy: avoiding ambiguity, providing appropriate context, balancing complexity
- Prompting methods: Few-shot, Chain-of-Thought, etc.
- Critical Online Reasoning (COR): the metacognitive skill of evaluating LLM output quality and judging the need for further refinement
A review-request prompt exercises all four. Compare these two:
- ❌ Weak prompt: “What do you think of this email?”
- ✅ Strong prompt: “Please review the following email using the PREP framework. Evaluate on four criteria: (1) Is the Point in the opening? (2) Is the Reason concrete? (3) Does the Example contain numbers or proper nouns? (4) Does the closing Point include a call to action? For each criterion, return OK or Needs Improvement plus a one-line note. If anything is missing, identify the location and suggest an improvement direction. Do not rewrite the email.”
5.2 Workplace text review is daily prompt practice
Practicing the latter style, one workplace message at a time, naturally builds:
- Explicit evaluation criteria (what to look at)
- Specified output format (OK / Needs Improvement + one-line comment)
- Bounded task scope (don’t rewrite)
— the core elements of prompt literacy, embedded in your day job. No separate “prompt engineering course” required. One review request a day is one prompt-practice session a day.
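Those three elements can even be made literal function parameters. The sketch below is hypothetical — `build_review_prompt` is not a library API, just a way to show that each prompt-literacy element has its own slot:

```python
def build_review_prompt(criteria: list[str], scope_rule: str, draft: str) -> str:
    """Compose a review request from the three prompt-literacy elements:
    explicit criteria, a specified output format, and a bounded scope."""
    numbered = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return (
        "Please review the following text.\n\n"
        f"[Evaluation criteria]\n{numbered}\n\n"
        "[Output format]\n"
        'For each criterion: "OK" or "Needs improvement" + a one-line note.\n'
        f"{scope_rule}\n"
        "---\n"
        f"{draft}"
    )

prompt = build_review_prompt(
    ["Is the Point in the opening?", "Is the Reason concrete?"],
    "Do not rewrite the text.",
    "(paste draft here)",
)
```

Writing prompts this way — criteria first, format second, scope last — is the daily practice that makes the habit transfer to every other AI task.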
5.3 What you gain at Stage 4
- You can ask AI for anything and get on-target output — coding help, research, summaries — quality improves across the board
- Metacognition develops [2] — the habit of articulating “what am I trying to judge here?”
- AI’s cost-effectiveness goes up dramatically — same model, vastly more value extracted
6. Templates by workplace-text type (for Stages 2–3)
Here are concrete templates you can copy-paste and try in Stages 2–3.
6.1 Business chat (Slack / Teams)
PREP template:
```
[P] Re: <topic>, the bottom line is <conclusion>.
[R] The reason is <why>.
[E] Specifically, <numbers / dates / proper nouns>.
[P] So I'd like you to <action>, or: my plan is <next step>.
```
AI review prompt:
```
Please review the following chat message using PREP, in 30 seconds or less.

[Evaluation criteria]
1. Can the conclusion be grasped from the opening sentence?
2. Is the reason concrete (not vague phrases like "just to be safe" or "things are busy")?
3. Does the example include numbers, dates, or proper nouns?
4. Is the recipient's action (what you want them to do) clear?

[Output format]
For each criterion: "OK" or "Needs improvement" + a one-line note. Do not rewrite.

---
(paste draft here)
```
6.2 Business email
PREP template (external):
```
Subject: [Request] Need your input on <topic> by Apr 30

Hi <Name>,

Hope you're doing well.

[P] I'm writing to request your response on <topic> by April 30.
[R] The reason: our <project> schedule depends on this date.
[E] Specifically, our kickoff is scheduled for May 1, and we need <X> finalized before then.
[P] I'd really appreciate your reply by 5pm on April 30.

Thanks for your time,
<Name>
```
AI review prompt:
```
Please review the following business email.

[Evaluation criteria]
1. Is the PREP structure working (especially: do the opening Point and closing Point align)?
2. Does the subject line convey "what, by when, who"?
3. Is the request's deadline and granularity concrete?
4. Are there any over- or under-formal phrasings, or redundant expressions?

[Output format]
For each criterion: a verdict and 1–2 sentences of feedback. Do not propose a rewrite.

---
(paste draft here)
```
Using generative AI to produce business-document templates or improve drafts is widely promoted through prompt collections published by business media [11]. This method narrows that pattern down to “ask only for review of your own draft,” preserving writing muscles while still benefiting from AI’s immediacy.
6.3 Status reports / weekly updates
PREP template:
```
[Weekly summary (Apr 22–Apr 28)]
[P] This week's bottom line: Feature A shipped on schedule; Initiative B missed its KPI.
[R] Why: A passed validation with no critical bugs. B's target segment responded weaker than expected.
[E] Specifics: A's post-launch error rate is 0.2% (target: <0.5%). B's actual CTR is 1.1% vs. 3% target.
[P] Next week's actions: continue monitoring A; redesign B with the hypothesis "wrong segment."
```
AI review prompt:
```
Please review the following weekly report from a manager's perspective.

[Evaluation criteria]
1. Can a manager judge "on track / needs intervention" from the opening Point alone?
2. Do Reason and Example back up "why this is true" with data?
3. Are next week's actions written with verbs (what to do) and dates (by when)?
4. Is the granularity sufficient that the manager doesn't need to ask follow-up questions?

[Output format]
For each criterion: "OK / Needs improvement" + a one-sentence reason. Do not rewrite.
At the end, list exactly 3 follow-up questions a manager would likely ask.

---
(paste draft here)
```
That last instruction — “list the follow-up questions a manager would ask” — is a deliberate device that forces AI to take the reader’s perspective on your behalf, mechanically correcting for the egocentrism bias Kruger & Epley identified [4].
7. A one-week roadmap from Stage 1 to Stage 3
7.1 Don’t aim for perfection — climb the stages
Even in the AWE meta-analysis, long-term intervention (g = 0.66) far outperformed short-term (g = 0.18) [1]. Don’t try to overhaul everything from day one — climb one step at a time.
| Day | Stage | Action |
|---|---|---|
| Day 1 | Stage 1 | Take one chat you’d normally answer with an emoji and write it in your own words (any tone) |
| Day 2 | Stage 1 | Take a topic you’d normally say “let’s hop on a call” about and force yourself to write it out |
| Day 3 | Stage 1 | Take one email you’d normally have AI write and draft it yourself |
| Day 4 | Stage 2 | Try the PREP template on a status-type message (daily log, weekly update) |
| Day 5 | Stage 2–3 | Send a Stage-2 draft to AI for review |
| Day 6 | Stage 3 | Consciously do the “rewrite it yourself” step after the review |
| Day 7 | Stage 3–4 | Rewrite the prompt itself in your own words instead of pasting the template |
The key: don’t try to do this on every text. Even one a day adds up to seven deliberate-practice sessions in a week. Just replacing seven situations a week — ones you’d otherwise close with emojis, calls, or AI ghostwriting — is enough to bring back the writing opportunities that had been disappearing. That’s exactly what the deliberate-practice research [3] points to.
7.2 Tips for keeping it under 30 seconds
- Keep your AI chat window open at all times (pinned tab or desktop app)
- Snippet-ize the review prompt (text-expansion tool, editor snippet, phone keyboard shortcut)
- Finish revision in one pass — don’t try to address every AI note; pick the 1–2 most important and apply only those
8. Responding to common objections
8.1 “Using AI strips out nuance and humanity”
This method is designed not to let AI ghostwrite. AI is the evaluator; the final tone (respect, gratitude, emojis, the personal touch) always comes from a human. Steiss and colleagues’ research [10] reports that ChatGPT feedback is strong on surface features but limited for content and higher-order reasoning, leaving emotional nuance and relationship-aware tone tuning as the human’s domain.
Concrete practice: after AI review, always layer in micro-adjustments like “add ‘I really appreciate your time’ at the end” or “open with one sentence touching on what they shared earlier this week.” If anything, having AI handle structure frees up cognitive bandwidth for the humanity layer.
8.2 “PREP is stiff and feels like overhead every single time”
You’ll feel that for the first three days. But once you start using the templates above as copy-paste, by day four the pattern is in your head and your thinking starts running in PREP order without the template. As the U.S. military’s standardization of BLUF demonstrates [8], leading with the conclusion is a rational structure for accelerating busy recipients’ decisions — not a posture of formality.
8.3 “It won’t fit my team’s text culture”
This method is complete at the individual level. You don’t need to change team conventions. If anything, once your text becomes clearer, people start saying “their reports are easy to read,” and the practice diffuses naturally. No consensus-building needed; results do the persuading.
8.4 “Relying on AI will erode my own writing ability”
That concern is valid if you use AI as a ghostwriter. But this method limits AI to evaluator. As confirmed by the AWE meta-analysis [1], learners who received feedback showed a large effect (g = 0.65) on transfer tasks (writing in new contexts) — direct evidence the writer’s own skill grew. That’s reinforcement of the feedback loop, not dependency.
8.5 “The pressure to always write polished prose is exhausting”
That feeling is correct. Doing PREP on everything is overreaction — and it can actively backfire by pushing you back toward “not writing.” This method targets only logical workplace text (~80%); small talk, empathy messages, and casual chat should stay casual.
The important variable isn’t “did I use PREP?” — it’s “did I write it in my own words?” A casual sentence like “Great launch! Smoother than expected 🙌” — written instead of dropped as a single emoji — keeps your writing muscles intact. Reclaiming writing frequency is the primary goal; structure is just a sub-rule that applies to 80% of cases.
Summary
Climbing through this roadmap, you end up developing four capabilities — through a single habit.
| Stage | Action | Capability gained |
|---|---|---|
| Stage 1 | Write one situation a day in your own words | Loss of psychological resistance to writing; vocabulary returns |
| Stage 2 | Apply PREP / BLUF to logical workplace text | Fewer follow-ups, faster decisions, less time per message |
| Stage 3 | Send drafts to AI for 30-second review | Instant third-party feedback; egocentrism bias corrected |
| Stage 4 (byproduct) | Keep writing review requests with explicit criteria | Core prompt literacy; metacognitive growth |
Secondary outcomes:
- Managers stop asking “so what’s the conclusion?”
- Reply rates on customer proposal emails go up
- Whatever you ask AI for, you get on-target output
- Quality improves across other AI use cases — meeting summaries, technical research, coding assistance
- People around the office start saying “their writing is easy to read”
And all of this starts from one step: “Tomorrow, take one chat you’d normally answer with an emoji, and write it in your own words.” Don’t aim for perfection — start at Stage 1. Within a week you’re at Stage 3; within a month the Stage 4 effects start showing.
Related articles
For more on related themes:
- A Blog-Writing Guide for Engineers Who Struggle to Verbalize — basic posture for using AI dialogue to organize thinking
- A Complete Guide to Chad Thiele’s “55 Prompting Strategies” — a systematic repertoire of prompting strategies
- The Expert Who Doesn’t Write Prompts: Meta-Prompting — the next stage where AI writes the prompts
- The Cost Structure of Evidence-Based Writing — distinguishing what AI can and cannot shorten
References
References are listed in the order of citation numbers used in the text.
1. Fleckenstein, J., Liebenow, L. W., & Meyer, J. (2023). Automated feedback and writing: a multi-level meta-analysis of effects on students’ performance. Frontiers in Artificial Intelligence. A 3-level meta-analysis of 20 studies, 84 effect sizes, and 2,828 learners. Overall effect size of automated writing feedback: g = 0.55; long-term intervention: g = 0.66; transfer tasks: g = 0.65. 【Reliability: High】(Peer-reviewed meta-analysis)
2. Federiakin, D., Molerov, D., Zlatkin-Troitschanskaia, O., & Maur, A. (2024). Prompt engineering as a new 21st century skill. Frontiers in Education. Positions prompt engineering as a 21st-century literacy and proposes a four-component competency framework: (1) basic structural understanding, (2) prompt literacy, (3) prompting methods, (4) Critical Online Reasoning. 【Reliability: High】(Peer-reviewed academic paper)
3. Ericsson, K. A. (2008). Deliberate practice and acquisition of expert performance: A general overview. Academic Emergency Medicine, 15(11), 988–994. Identifies the components of deliberate practice as explicit learning goals, measurable performance criteria, immediate feedback, and activity design by a teacher/coach. Positions immediate feedback as “the most important task characteristic explaining differences in expert performance.” 【Reliability: High】(Landmark review in expertise research)
4. Kruger, J., Epley, N., Parker, J., & Ng, Z. W. (2005). Egocentrism over e-mail: Can we communicate as well as we think? Journal of Personality and Social Psychology, 89(5), 925–936. Across five experiments, demonstrates that email senders overestimate their ability to convey intended tone (sarcasm, sincerity, humor): actual email accuracy was about 56%, while senders predicted about 78%. Cause: egocentrism bias. 【Reliability: High】(Peer-reviewed classic with extensive citations)
5. Steenz Editorial (2024). More than half have experienced their meaning failing to come through: the difficulty of text communication. News coverage of a survey conducted by the keyboard app Simeji from March 7–29, 2024 (n = 3,516, Gen Z and adults 25+). 57.7% reported that their meaning “often” or “sometimes” fails to come through in text, vs. 30.2% for voice. 【Reliability: Medium】(Primary data is the app operator’s in-house survey; Steenz is the secondary reporting source)
6. Chatwork Inc. (2022). What is the PREP method? Practice methods for clear writing, with examples. Definition of the PREP method (Point → Reason → Example → Point) and concrete examples for business email and chat. Explicitly notes its limitations for emotional contexts and long-form composition. 【Reliability: Medium】(Industry media)
7. makefri Editorial (2020). What is the PREP method? Structuring clear explanations that land, with examples. Before/after examples of PREP applied to business reports and proposals (including the case where a complaint takes priority and printing isn’t done). 【Reliability: Medium】(Industry media)
8. BLUF (communication). Wikipedia. Explains “Bottom Line Up Front,” a U.S. military communication principle that puts the conclusion first. Adopted across the Navy, Marine Corps, Army, and Air Force; aims to accelerate busy recipients’ decision-making. 【Reliability: Medium】(Encyclopedia entry, secondary explanation of primary sources)
9. Zhai, N., & Ma, X. (2023). The Effectiveness of Automated Writing Evaluation on Writing Quality: A Meta-Analysis. Journal of Educational Computing Research. A meta-analysis of 26 studies and 2,468 participants. Overall effect of AWE on writing quality: g = 0.861, p < 0.001 (large). Effects particularly pronounced for argumentative writing and amplified for L2 learners. 【Reliability: High】(Peer-reviewed meta-analysis)
10. Steiss, J., Tate, T., Graham, S., et al. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction. Across five elements of formative feedback, human reviewers were rated higher than ChatGPT on four. The gap was modest, however, and ChatGPT was effective for surface-feature improvements (mechanics, grammar, tone). Limited value for content and higher-order thinking. 【Reliability: High】(Peer-reviewed academic paper)
11. Sei San Sei Inc. (2026). 30 generative AI prompt examples: ready-to-use templates by sales, HR, accounting, and marketing function. 30 examples of AI prompts for business email and feedback writing. Provides function-specific templates for complaint-response emails, HR evaluation comments, internal notices, and more. 【Reliability: Medium】(Industry media)