Five Layers of Context IT Engineers Should Recognize: Beyond Technical Boundaries
This article was generated by AI. The accuracy of the content is not guaranteed, and we accept no responsibility for any damages resulting from use of this article. By continuing to read, you agree to the Terms of Use.
- Target audience: Mid-career engineers searching for differentiation beyond technical skill, and junior-to-senior engineers rethinking their career direction
- Prerequisites: General experience in software development
- Reading time: Full read about 30 minutes / key points about 12 minutes
Overview
“Just hand me the spec and I can implement anything.” That sentence sounds confident, but it is more often a warning sign than a sign of strength. Even with strong technical skill, whether a project succeeds or a proposal lands frequently comes down to one thing: how far out the engineer can perceive the surrounding context.
The numbers back this up. Pendo’s 2019 study, which analyzed three months of usage logs across 615 companies, found that 80% of software features are “rarely or never used” (Frequent 12%, Moderate 8%, Rare 56%, Never 24%)1. PMI’s 2014 global survey identified inaccurate requirements gathering as the top contributor to project failure at 37%2. A 2020 peer-reviewed paper in PLOS ONE by Iqbal et al., drawing on industry surveys across 12 companies, reported that 48% of all errors across the software development lifecycle originate in requirements engineering (RE)3.
These are independent studies, but they all point to the same structural truth: the largest losses come from gaps in awareness before any code is written — who uses it, how, why we are building it, and what constraints apply. Specs do not capture all of this. Who uses the system, in what workflow, on what device. How a feature ties into revenue. Who actually has authority to approve a proposal. The political history between your team and the next one over. What regulations and industry standards apply. Skip these and you produce a steady stream of “built to spec but unused,” “shipped, then management reams us out,” “logically right but never adopted internally,” and “shipped a regulatory violation.”
This article organizes the contexts an IT engineer should recognize into five layers — technical, user, business, organizational, and market/societal — and shows what goes wrong when each layer is ignored, drawing on academic research, industry studies, and public references. The arrangement of five layers is the framing this article proposes; the individual concepts that make up each layer all rest on established research and practice. None of this is a trendy new concept. Developer situational awareness in software engineering has been studied for years through models such as SW-Context4; domain-driven design5 and Conway’s Law6 are structural debates that span half a century.
The five layers of context an IT engineer should recognize
```mermaid
flowchart TB
    A[Market & Society<br>Regulation / Competition / Ethics / Industry]
    B[Organization<br>Decision-making / Culture / Power dynamics]
    C[Business<br>Revenue model / KPIs / Customer contracts]
    D[User<br>Usage context / Non-functional requirements / Workflows]
    E[Technical<br>Code / Dependencies / History / Design decisions]
    A --> B --> C --> D --> E
```
The outer layers are more abstract and broader; the inner layers more concrete and immediate. Most engineers start at the technical layer and gradually expand outward. The SW-Context model proposed by L. F. D’Avila et al.4 structures the information, premises, and constraints that lead to a design decision so that later developers can accurately understand “the current state.” The fact that research into supporting situational awareness predates this work shows that this is a timeless, universal problem.
Layer 1: Technical context (code / dependencies / history / design decisions)
The innermost layer, and the one engineers are most comfortable with. Codebase structure, why a particular framework was chosen, the history of past incidents, version constraints in dependencies, ADRs (Architecture Decision Records), test strategy — the “context of implementation and design.”
Evidence: more than half of development time goes to “reading and understanding”
Xia et al.’s 2018 large-scale field study, published in IEEE Transactions on Software Engineering, collected 3,148 hours of work data from 78 professional developers and demonstrated that developers spend an average of 58% of their time on program comprehension activities7. Minelli et al.’s 2015 study analyzed roughly 5 million IDE events from 18 developers and showed that “time spent reading and understanding source code” dominates over editing or navigation8.
In other words, much of the time engineers spend “doing the work” is actually time spent reading the technical context. When the documentation that supports that reading is thin, developers fall back on memory and ad-hoc interviews. The 2006 ICSE paper by LaToza, Venolia, and DeLine reported that developers expend significant effort on code exploration and interrupting teammates just to maintain a mental model, that documentation of design rationale is inadequate, and that much of the relevant knowledge lives only in individual memory9.
Evidence: design decisions “evaporate” if they are not written down
Jansen and Bosch’s 2005 WICSA paper introduced the concept of “knowledge vaporization” — design decisions get implicitly baked into the architecture without first-class representation, and the underlying knowledge is lost10. Falessi et al.’s 2013 ACM TOSEM paper used controlled experiments at two sites to quantitatively confirm that the absence of design rationale information makes activities like impact analysis and large-scale redesign substantially harder11.
The industry’s response to this problem is the ADR (Architecture Decision Record). Tyree and Akerman argued in their 2005 IEEE Software paper that explicitly documenting major architectural decisions makes the development process more structured and transparent12. Michael Nygard formalized a practical ADR template in 2011, observing that “the hardest thing to track during the life of a project is the motivation behind certain decisions” — when a newcomer encounters code lacking that context, they are stuck choosing between “blind acceptance (preserving outdated choices) and blind change (undermining the project’s value)”13.
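Nygard's template is deliberately small: five sections, each a few sentences. A sketch of the shape (the project details here are invented):

```markdown
# ADR 7: Use PostgreSQL advisory locks for job scheduling

## Status
Accepted (supersedes ADR 4)

## Context
Two workers occasionally picked up the same job; we need mutual
exclusion without adding a new infrastructure component.

## Decision
Use PostgreSQL advisory locks keyed by job id, taken inside the
existing transaction, instead of introducing Redis or ZooKeeper.

## Consequences
Scheduling is now coupled to PostgreSQL availability; any future
move away from PostgreSQL must revisit this decision.
```

The Context and Consequences sections are exactly the information that otherwise "evaporates": the newcomer who reads this no longer has to choose between blind acceptance and blind change.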
What goes wrong if you miss this layer:
- You re-implement a feature that was previously deleted because “it would be useful,” walking right back into the reason it was removed (a security vulnerability, a regulatory violation, a customer complaint).
- You miss an existing caching layer and add a new one, producing consistency bugs from double-caching.
- You “improve” a function without considering concurrency, transaction boundaries, or lock granularity — locally correct, but it breaks the moment it integrates with the rest of the system.
The gap between juniors and seniors shows up exactly here. Seniors have a habit of asking “why does this code look the way it does today?” and the research data on time allocation reflects the behavior that habit produces.
Layer 2: User context (usage context / non-functional requirements / workflows)
“Who uses it, in what situation, and how” — the part that never makes it into the spec. When the batch jobs run, the traffic ratio between off-peak and peak periods, what device end users are on outside of business hours, the operational shortcuts where people skip screens, the rollback flow that assumes data entry mistakes.
Evidence: 80% of features go unused; missed requirements drive failure
The Pendo 2019 study cited at the outset showed, from real usage logs, that “80% of features are rarely or never used” and that “12% of features generate 80% of the use”1. The Standish Group’s CHAOS Report 2015, drawing on more than 25,000 projects, reported a 36% success rate against traditional metrics (time, budget, goal attainment), with 45% Challenged and 19% Failed14. PMI’s 2014 study put requirements-driven failure at 37%2; Iqbal et al.’s 2020 peer-reviewed paper put RE-driven errors at 48% of all SDLC errors3. In Japan, the IPA’s “Software Development Analytical Data 2022” analyzed data from 5,546 projects and reported that both reliability and productivity have been trending downward in recent years15.
Evidence: user context can only be acquired by observation
“You cannot understand what users are doing just by asking them” is a classical proposition in HCI. Hugh Beyer and Karen Holtzblatt’s Contextual Design (1998) proposed Contextual Inquiry as a core method, arguing that “what users actually do, the motivations behind it, latent needs, and core values can only be understood through direct observation in the natural work context”16.
The taxonomy of non-functional requirements is also not something that emerges on its own — it has to be defined explicitly. ISO/IEC 25010:2023 classifies product quality into nine characteristics: Functional Suitability, Performance Efficiency, Compatibility, Interaction Capability, Reliability, Security, Maintainability, Flexibility, and Safety (Usability from the 2011 version was replaced by Interaction Capability, Portability by Flexibility, and Safety was added)17. The fact that an official taxonomy of “non-functional requirements” exists at all is also a reminder that these are exactly the dimensions that tend to be overlooked.
What goes wrong if you miss this layer:
- You design assuming “users log in five times a day,” only to find it is an operations tool that polls every minute, and your authentication path falls apart.
- You target 100 ms response time, but users send ten requests in parallel as part of normal operation, and the server collapses under load.
- You ignore the reality that operators step away mid-task to take a phone call, and your session-timeout logic does not match how the system is actually used.
None of these are knowable from a perfect spec alone. It is reasonable to assume that the “80% of features that go unused” in Pendo’s data contains many features born from precisely this kind of awareness gap.
Layer 3: Business context (revenue model / KPIs / customer contracts)
How the company makes money, which numbers leadership cares about, which features are contractually mandatory for the customer — the economic context you need before writing code.
Evidence: domain-driven design and IT-business alignment have been open problems for nearly half a century
Eric Evans’s Domain-Driven Design: Tackling Complexity in the Heart of Software (Addison-Wesley, 2003)5 argued that the heart of software development is the domain and its logic, and used concepts such as ubiquitous language and bounded contexts to systematize the importance of domain knowledge in engineers. Vaughn Vernon’s Implementing Domain-Driven Design (2013)18 translated those ideas into implementation-level practice.
But the problem is older than that. Henderson and Venkatraman’s 1993 Strategic Alignment Model (IBM Systems Journal)19 is a classic, with over 3,200 citations, arguing that alignment between business strategy and IT determines organizational performance. Coltman et al.’s 2015 review (Journal of Information Technology) noted that “strategic IT alignment, even 25 years after it was proposed, remains an unresolved structural problem”20. Empirical work by Kang, Hahn, and De showed that in outsourced ISD engagements, teams with domain knowledge outperform on both project efficiency and quality21.
Evidence: industry regulation is not something you get to plead ignorance of
If you are writing software for a particular industry, not knowing its official guidelines or applicable laws will break your design decisions at the root.
- Finance: PCI DSS v4.0.1 from the PCI Security Standards Council (published June 2024; v4.0 retired on December 31, 2024)22 is the primary source for security requirements that any system handling payment cards must meet. In Japan, the Financial Instruments and Exchange Act (Act No. 25 of 1948, available via the e-Gov Law Search)23 and the FISC (The Center for Financial Industry Information Systems) “Security Guidelines on Computer Systems for Financial Institutions,” 13th edition (published March 2025, reflecting the Economic Security Promotion Act, operational resilience, AI safety measures, and more)24 form the de facto standard.
- Healthcare: Japan’s Ministry of Health, Labour and Welfare “Guidelines for Safety Management of Medical Information Systems,” version 6.0 (May 2023)25 is the mandatory baseline for medical information system implementations in Japan.
What goes wrong if you miss this layer:
- Without understanding how feature releases hit revenue, you “tidy up” a top-revenue feature for cosmetic reasons and conversion drops after release.
- For a feature contracted under a 99.9% SLA, you slip in an “easy little change” that includes an unplanned restart.
- When improving the payments flow, you do not realize the company’s revenue depends on a three-phase structure (authorization hold → settlement → reconciliation), or that PCI DSS compliance is mandatory, and your improvement proposal makes no operational sense26.
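To make the 99.9% figure above concrete, a quick sketch of the downtime budget an SLA actually leaves (the periods and thresholds here are illustrative):

```python
def downtime_budget_minutes(sla: float, days: int = 30) -> float:
    """Minutes of allowed downtime per period for a given availability SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla)

# A 99.9% SLA leaves roughly 43.2 minutes per 30-day month; a single
# unplanned restart with a slow warm-up can consume much of that budget.
print(round(downtime_budget_minutes(0.999), 1))   # 43.2
print(round(downtime_budget_minutes(0.9999), 2))  # 4.32
```

Knowing this number before proposing the "easy little change" is the difference between an engineering decision and a contractual incident.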
“Anyone can implement; nobody understands the business” is a complaint that has been repeated since DDD, and it is a structural problem still unresolved more than two decades later.
Layer 4: Organizational context (decision-making / culture / power dynamics)
Who can approve what, the political history that led to the current architecture, the relationship between your team and the team next door, taboo topics to avoid, what behaviors the performance review system is actually rewarding.
Evidence: organizational structure dictates software structure (Conway’s Law)
In 1968, Melvin Conway argued in “How Do Committees Invent?” that “any organization that designs a system will inevitably produce a design whose structure is a copy of the organization’s communication structure”6. MacCormack, Baldwin, and Rusnak’s 2012 Research Policy paper compared commercial software with functionally equivalent open-source projects and showed statistically that products built by loosely coupled organizations are significantly more modular than those built by tightly coupled ones (an empirical confirmation of the Mirroring Hypothesis)27. Skelton and Pais’s Team Topologies (2019)28 inverts the relationship and systematizes the Inverse Conway Maneuver — designing the organization in order to obtain the desired architecture.
Evidence: incentive systems distort technical judgment
Steven Kerr’s 1975 Academy of Management Journal paper “On the Folly of Rewarding A, While Hoping for B”29 is a management classic showing, through many real-world examples, how reward systems diverge from an organization’s actual goals. Goodhart’s Law (1975) and Strathern’s 1997 formulation generalize this as “once a measure becomes a target, it ceases to be a good measure”30. The pattern of an engineer evaluated on release count getting punished for taking on a large refactor is fully explained by these theories.
Evidence: organizational culture maps directly to software performance
Ron Westrum’s 2004 paper in Quality and Safety in Health Care classified organizational cultures into three types — pathological, bureaucratic, and generative — distinguished by how information flows, how failure is handled, and how novelty is received31. Later DORA / DevOps research (Accelerate) adopted this typology as the basis for its performance-prediction model. Coplien and Harrison’s Organizational Patterns of Agile Software Development (2004)32 studied more than 100 real software organizations and extracted roughly 100 organizational patterns, describing in pattern-language form how power dynamics and communication paths determine project outcomes.
What goes wrong if you miss this layer:
- You make a technically flawless proposal, only for it to be torn apart because the receiving team was burned badly by that exact technology in the past.
- Your individual performance is being measured on release count, but you take on a large refactor and your evaluation drops.
- You ignore that “this team’s review culture is to read carefully in pairs” and flood reviewers with many small PRs, exhausting them.
- You miss that cross-functional decisions are effectively made over informal lunches rather than in formal meetings, push for a vote in the meeting, and trigger backlash.
The stronger the technical skill, the easier it is to fall into the pattern of “winning on technical merit while losing in the organization.”
Layer 5: Market/societal context (regulation / competition / ethics / industry trends)
GDPR, Japan’s Personal Information Protection Act, industry-specific regulations, the moves of competing products, open-source license implications of technical choices, and the social lines that simply must not be crossed.
Evidence: the cost of regulatory violation is large and real
GDPR (Regulation (EU) 2016/679)33 permits sanctions of up to EUR 20 million or 4% of global annual turnover, whichever is higher. The GDPR Enforcement Tracker run by CMS.Law34 consolidates real-world enforcement actions (Meta at EUR 1.2 billion, Uber at EUR 290 million, and so on). In Japan, the Personal Information Protection Commission regularly updates the Personal Information Protection Act and its accompanying guidelines35.
The IBM Security and Ponemon Institute “Cost of a Data Breach Report 2025”36 reports a global average breach cost of USD 4.44 million, USD 10.22 million in the U.S., and USD 7.42 million for healthcare. Designs that put personal information into logs, or that defer encryption, translate directly into these costs.
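As one concrete mitigation, a minimal sketch (names hypothetical) of keeping e-mail addresses out of stored logs with a standard-library logging filter; a real system would extend the pattern list to whatever PII it actually handles:

```python
import logging
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class RedactPII(logging.Filter):
    """Masks e-mail addresses before a record reaches any handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        # Render %-style args first, then redact the final message text.
        record.msg = EMAIL_RE.sub("<redacted>", record.getMessage())
        record.args = None
        return True

logger = logging.getLogger("app")
logger.addFilter(RedactPII())
logger.warning("password reset requested for alice@example.com")
# logged as: password reset requested for <redacted>
```

Redacting at the logging boundary means no individual call site has to remember the rule, which is exactly the property an auditor will ask about.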
Evidence: open-source license standards and compatibility frameworks are publicly available
OSS license compatibility cannot be judged by feel, but the SPDX License List (Linux Foundation, v3.28.0, released February 20, 2026)37 and the OSI Approved Licenses38 provide official frameworks. Pulling OSS in without consulting them puts the distributability of your own product at risk.
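A minimal sketch of what "consulting them" can look like in practice: gate the dependency list against an allowlist of SPDX identifiers (the package names and the policy itself are invented for illustration):

```python
# Allowlist decided by the product's own distribution requirements.
ALLOWED_SPDX = {"MIT", "BSD-3-Clause", "Apache-2.0", "ISC"}

def license_violations(deps: dict[str, str]) -> list[str]:
    """Return dependencies whose SPDX identifier is not on the allowlist."""
    return sorted(name for name, spdx in deps.items() if spdx not in ALLOWED_SPDX)

deps = {"requests": "Apache-2.0", "somelib": "GPL-3.0-only"}
print(license_violations(deps))  # ['somelib']
```

In CI this check would read identifiers from the dependency manifest rather than a hand-written dict, and fail the build on any violation.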
What goes wrong if you miss this layer:
- You design logs that include personal information and bake a regulatory violation directly into the system.
- You pick a design that is incompatible with a competitor’s published API specification and pay for it later.
- You bring in OSS without checking license compatibility, and your product’s distribution gets restricted.
- You do not know the industry-standard security baseline (for finance, for example, the FISC security guidelines24), and a pre-release audit forces a large-scale rework.
This layer is easy to dismiss as “not my specialty,” but the responsibility for judgments hard-coded into the codebase ultimately lands on the implementer.
What the five layers reveal about the real difficulty of engineering work
What stands out once you lay out the five layers is this: everything outside Layer 1 (technical) is invisible in the spec. A spec compresses business, organizational, and market context into “requirements” — why those requirements exist, how they will be operated, who approved what — none of that can be reconstructed by reading the document alone. That is precisely why requirements-driven failures account for 48% of SDLC errors3 and 80% of features go unused1.
Strong engineers, the moment they receive a spec, immediately try to reconstruct Layers 2 through 5 behind it. “Whose work is this feature meant to make easier?” “What number does leadership want to move?” “What will the other team say if we build this?” “What is the regulatory exposure?” Engineers with strong technical skills become even more powerful when they can connect those questions to technical decisions. That said, this is about individual situational awareness; it does not absolve the organization of its responsibility to supply context (see the column below).
Conversely, staying inside Layer 1 produces a recognizable cycle:
- Implement the spec correctly, technically.
- Mismatches with Layers 2–5 trigger criticism from operations, leadership, neighboring teams, or regulators.
- Defend with “I built it to spec” and refuse to take ownership.
- Fix-ups accumulate, but no underlying lesson is learned.
- The same thing happens on the next project.
The stronger the technical skills, the easier it is to fall into this cycle: you can write code fast, and you can use technical correctness as a shield.
Column: relationship to “Context Engineering”
The term Context Engineering — designing the context handed to an AI agent — has become more common recently3940. In the framework of this article, that is essentially the act of writing out part of the five-layer context in a form the AI can consume. It is not a newly discovered domain of awareness; it is the externalization, into CLAUDE.md / AGENTS.md / prompt templates, of what strong engineers were already carrying in their heads, now being given a name and a methodology. AI or no AI, the foundational question is still whether the humans involved actually perceive the five layers.
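As a hypothetical illustration (every detail below is invented), such an externalized context file might carry one slice of each layer:

```markdown
## Business context
- Checkout is the revenue path and runs under a 99.9% SLA. Never ship
  changes here that require an unplanned restart.

## User context
- Operators poll the dashboard every minute from shared terminals;
  aggressive session timeouts break their workflow.

## Technical context
- The pricing cache was removed in 2023 after a consistency incident
  (see ADR 12); do not reintroduce caching on that path.
```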
Individual awareness is not an obligation to compensate for organizational failure
Having read this far, you might take the claim "strong engineers perceive the five layers" to mean "the ideal is for the individual to compensate for everything." That is not the intent of this article.
The ability to perceive the five layers is a career asset for the individual, not a tool to justify individuals quietly absorbing context the organization should be providing. If anything, an organization where many of the five layers depend on individuals tacitly compensating has a separate problem in its context-supply capability. Individual awareness and organizational supply are complementary, and neither alone is sustainable. The organizational side of this responsibility is something I plan to address in detail in a separate article.
Summary
- The contexts an IT engineer should recognize organize cleanly into five layers: technical, user, business, organizational, and market/societal.
- Everything outside Layer 1 is invisible in the spec; you can only obtain it through interviews, observation, understanding internal politics, and regulatory research.
- Failure to perceive each layer mass-produces “works but unused,” “works but business-irrelevant,” “works but organizationally rejected,” and “works but in violation of regulation.” Pendo reports 80% of features go unused1; PMI puts requirements-driven failure at 37%2; Iqbal et al. put RE-driven errors at 48% of SDLC errors3.
- 58% of developer time goes to program comprehension7, and design knowledge “evaporates” if it is not documented10. Organizational structure dictates software structure627, and incentive systems distort technical judgment2930.
- This is a timeless, universal structural pattern, supported by half a century of research and empirical data.
The practice itself is unglamorous. For your current project, write down one piece of “information that someone unfamiliar would struggle without” for each of the five layers. Any layer where you cannot write something is a hole in your awareness — that layer is currently being covered by someone else (or by no one, in which case it is simply sitting there as risk).
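A sketch of what the exercise can produce, one line per layer (the entries are invented):

```markdown
## Context notes — project X
1. Technical: the retry queue exists because of the 2022 outage (ADR 9).
2. User: support staff use the admin screen on tablets, not desktops.
3. Business: the enterprise contract mandates data residency in the EU.
4. Organization: the platform team signs off on any new datastore.
5. Market/society: audit logs fall under FISC guideline requirements.
```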
Related articles
You may also be interested in these related posts:
- Beyond Loose-Coupling Supremacy: Understanding ‘Balancing Coupling in Software Design’ — context for design decisions in Layer 1 (technical context)
- Why Engineering Management Becomes a Punishment Game in Japan: Three Separations — the structural breakdown of Layer 4 (organizational context)
- Escaping Jack-of-All-Trades: The Path of Late Specialization — choosing to go deep into a domain after building broad awareness across the five layers
- I-Shaped, T-Shaped, π-Shaped: A Skill Matrix of Depth and Breadth — how breadth of context awareness relates to career shape
- Why Deep Specialists Will See Their Market Value Explode in the AI Era: Neurodiversity x Jagged Frontier Evidence — strategy for turning Layer 3 (domain knowledge) into a weapon
References
References are listed in the order of citation numbers used in the body.
1. 2019 Feature Adoption Report — Pendo / Suja Thomas (2019). Analyzed three months of subscription usage logs across 615 companies. Reports that 80% of features are rarely or never used and 12% generate 80% of usage. [Reliability: Medium]
2. Requirements Management: A Core Competency for Project and Program Success — Project Management Institute, Pulse of the Profession In-Depth Report (2014). A survey of more than 2,000 respondents identifying inaccurate requirements as the top contributor to failure at 37%. [Reliability: High]
3. Requirements engineering issues causing software development outsourcing failure — Iqbal et al., PLOS ONE (2020-04-09). Frames RE-related errors as 48% of all SDLC errors. [Reliability: High]
4. SW-Context: a model to improve developers’ situational awareness — L. F. D’Avila et al., IET Software (2020). A model that structures the context information leading to design decisions to enhance developer situational awareness. [Reliability: High]
5. Domain-Driven Design: Tackling Complexity in the Heart of Software — Eric Evans, Addison-Wesley (2003). ISBN: 978-0-321-12521-7. The classic that systematized the importance of domain knowledge through ubiquitous language and bounded contexts. [Reliability: High]
6. How Do Committees Invent? — Melvin E. Conway, Datamation (April 1968). The original source of Conway’s Law, arguing the homomorphism between organizational communication structure and system design. Available on the author’s official site. [Reliability: High]
7. Measuring Program Comprehension: A Large-Scale Field Study with Professionals — Xia, Bao, Lo, Xing, Hassan, Li, IEEE Transactions on Software Engineering, vol. 44, no. 10 (2018). Collected 3,148 hours of data from 78 developers, demonstrating an average of 58% of time spent on program comprehension. [Reliability: High]
8. I Know What You Did Last Summer – An Investigation of How Developers Spend Their Time — Minelli, Mocci, Lanza, ICPC (2015). Analysis of 18 developers and roughly 5 million IDE events showing that code reading dominates over navigation and editing. [Reliability: High]
9. Maintaining Mental Models: A Study of Developer Work Habits — LaToza, Venolia, DeLine, ICSE (2006). Reports that developers heavily rely on code exploration and team interruptions to recover tacit knowledge, and that design rationale documentation is inadequate. [Reliability: High]
10. Software Architecture as a Set of Architectural Design Decisions — Jansen, Bosch, WICSA (2005). Introduces the concept of “knowledge vaporization,” where design decisions become implicit and disappear. [Reliability: High]
11. The Value of Design Rationale Information — Falessi, Briand, Cantone, Capilla, Kruchten, ACM TOSEM, vol. 22, no. 3 (2013). Empirically validates the value of documenting design rationale through controlled experiments at two sites. [Reliability: High]
12. Architecture Decisions: Demystifying Architecture — Tyree, Akerman, IEEE Software, vol. 22, no. 2 (2005). Argues for the importance of explicitly documenting architectural decisions. [Reliability: High]
13. Documenting Architecture Decisions — Michael Nygard, Cognitect blog (2011-11-15). The original proposal of a practical ADR template. [Reliability: Medium-High]
14. CHAOS Report 2015 — The Standish Group International (2015). Analysis of more than 25,000 projects: 36% successful, 45% Challenged, 19% Failed under Traditional Resolution. [Reliability: Medium-High]
15. Software Development Analytical Data 2022 — Information-technology Promotion Agency (IPA), Digital Foundation Center (2022-09-26). Quantitative analysis of 5,546 enterprise projects in Japan; both reliability and productivity have been declining recently. [Reliability: High]
16. Contextual Design: Defining Customer-Centered Systems — Hugh Beyer, Karen Holtzblatt, Morgan Kaufmann (1998). ISBN: 978-1-55860-411-7. The classic standard text on HCI built around Contextual Inquiry. [Reliability: High]
17. ISO/IEC 25010:2023 — Systems and software engineering — SQuaRE — Product quality model — ISO/IEC (2023). The official international standard classifying product quality into nine characteristics. [Reliability: High]
18. Implementing Domain-Driven Design — Vaughn Vernon, Addison-Wesley (2013). ISBN: 978-0-321-83457-7. The standard reference translating DDD into implementation-level practice. [Reliability: High]
19. Strategic Alignment: Leveraging Information Technology for Transforming Organizations — Henderson, Venkatraman, IBM Systems Journal, vol. 32, no. 1 (1993). Classic on alignment between business strategy and IT, with over 3,200 citations. [Reliability: High]
20. Strategic IT alignment: twenty-five years on — Coltman, Tallon, Sharma, Queiroz, Journal of Information Technology, 30(2) (2015). Frames IT alignment as an unresolved structural issue 25 years on. DOI: 10.1057/jit.2014.35. [Reliability: High]
21. Learning Effects of Domain and Technology Knowledge in Outsourced Information Systems Development — Kang, Hahn, De (SSRN). Empirically demonstrates the advantage of teams with domain knowledge in outsourced ISD engagements. [Reliability: Medium-High]
22. Payment Card Industry Data Security Standard v4.0.1 — PCI Security Standards Council (2024-06-11). Mandatory security requirements for systems handling payment cards. v4.0 was retired on 2024-12-31; v4.0.1 is the only currently valid version. [Reliability: High]
23. Financial Instruments and Exchange Act (Act No. 25 of 1948) — Government of Japan / Jurisdiction: Financial Services Agency / Source: e-Gov Law Search (Digital Agency). Legal requirements for financial instruments business operators. [Reliability: High]
24. Security Guidelines on Computer Systems for Financial Institutions, 13th Edition — The Center for Financial Industry Information Systems (FISC) (2025-03-21). The de facto standard for IT systems at Japanese financial institutions. Reflects the Economic Security Promotion Act, operational resilience, AI safety measures, and more. [Reliability: High]
25. Guidelines for Safety Management of Medical Information Systems, Version 6.0 — Ministry of Health, Labour and Welfare (2023-05). The mandatory baseline for medical information system implementations in Japan. [Reliability: High]
26. Why Domain Knowledge Becomes an Engineer’s Weapon — Masato Kawakami (2026-02-25). Argues for the scarcity value of domain knowledge using industry-specific business logic (finance, healthcare, logistics) as examples. [Reliability: Medium]
27. Exploring the duality between product and organizational architectures: A test of the “mirroring” hypothesis — MacCormack, Baldwin, Rusnak, Research Policy, vol. 41, no. 8 (2012). Empirical study confirming Conway’s Law: products from loosely coupled organizations are statistically more modular than those from tightly coupled ones. [Reliability: High]
28. Team Topologies: Organizing Business and Technology Teams for Fast Flow — Matthew Skelton, Manuel Pais, IT Revolution Press (2019; 2nd ed. 2025). Four team types, three interaction modes, and the Inverse Conway Maneuver. [Reliability: High]
29. On the Folly of Rewarding A, While Hoping for B — Steven Kerr, Academy of Management Journal, vol. 18, no. 4 (1975). Classic paper on how reward systems diverge from real organizational goals. [Reliability: High]
30. Goodhart’s Law / Strathern, M. (1997) “Improving ratings: audit in the British University system” European Review 5(3): 305–321. Formulates the principle that “once a measure becomes a target, it ceases to be a good measure.” [Reliability: High]
31. A typology of organisational cultures — Ron Westrum, Quality and Safety in Health Care, vol. 13, Suppl 2 (2004). DOI: 10.1136/qshc.2003.009522. Three types — pathological, bureaucratic, generative. The basis of the later DORA research. [Reliability: High]
32. Organizational Patterns of Agile Software Development — James O. Coplien, Neil B. Harrison, Prentice Hall (2004). ISBN: 978-0-13-146740-8. Roughly 100 organizational patterns extracted from a study of more than 100 organizations. [Reliability: High]
33. Regulation (EU) 2016/679 (GDPR) — European Parliament and Council, EUR-Lex (2016-04-27). The primary EU data-protection regulation. Sanctions of up to EUR 20 million or 4% of global annual turnover. [Reliability: High]
34. GDPR Enforcement Tracker — CMS.Law (continuously updated). A database of fines issued by EU national data protection authorities, including Meta at EUR 1.2 billion and Uber at EUR 290 million. [Reliability: Medium-High]
35. Personal Information Protection Act and guidelines — Personal Information Protection Commission (continuously updated). Official Japanese statutory text and guidelines. [Reliability: High]
36. Cost of a Data Breach Report 2025 — IBM Security / Ponemon Institute (2025). Global average breach cost USD 4.44 million; U.S. USD 10.22 million; healthcare USD 7.42 million. [Reliability: Medium-High]
37. SPDX License List — Linux Foundation / SPDX Project, v3.28.0 (2026-02-20). De facto industry standard providing identifiers and full text for over 600 OSS licenses. [Reliability: High]
38. OSI Approved Licenses — Open Source Initiative (continuously updated). The list of OSI-approved licenses. [Reliability: High]
39. Effective context engineering for AI agents — Anthropic Engineering (2025-09-29). Defines Context Engineering, its components, context rot, and the importance of a minimal high-signal token set. [Reliability: High]
40. Context Engineering for Coding Agents — Birgitta Böckeler, martinfowler.com (2026-02-05). Four categories of context for coding agents and a warning about the “illusion of certainty.” [Reliability: Medium-High]
Other references (not cited by number in the body)
- Context Engineering for LLM Development - Four Kinds of Knowledge — Takuya Kubo (2025-10-05). Organizes the knowledge that should be handed to an LLM into four categories: general operations, software engineering, practice, and domain. [Reliability: Medium]