Introduction
Over the last eighteen months, every major researcher publishing on AI capability has pointed at the same shift. Aschenbrenner argues AGI by 2027 is strikingly plausible and that a few hundred people on earth have situational awareness about what is coming. Amodei, the CEO of Anthropic, puts seventy to eighty percent probability on a billion-dollar single-employee company existing by the end of this year. Sutskever left OpenAI and said the scaling era is over, that what compounds now is ideas and domain knowledge. They disagree on timelines and risk. They converge on the window: a one-to-three-year gap between AI capability and industry adoption. Anyone encoding inside that window is compounding. Anyone who starts after it closes faces an incumbent whose lead widens every month the system runs.
BCG studied 1,250 companies deploying AI. Five percent captured almost all the value: 1.7x higher revenue growth, 3.6x stronger shareholder returns. The other ninety-five percent spent more than they earned back. The difference was not budget, talent, or technology. The five percent had encoded their expertise into systems that operated without the expert in the room. The ninety-five percent had bolted AI onto processes where the core knowledge had never left the founder's head.
Last August at a private founders dinner in Bali, forty people worth between five million and a hundred million dollars spent the entire evening asking the same question: who actually knows what they're doing with AI, and how do you tell? People who had been building for decades. And they could not cut through the noise.
This paper maps the architecture that separates the five percent from the ninety-five. The model is M = T × S × K × A × E × I × L: Truth, Situational Awareness, Knowledge, Architecture, Encoding, Infrastructure, Leverage. Seven variables, multiplicative. Each one is a dependency: remove any single one and the output of the entire system collapses, regardless of how strong the remaining six are. The model was built from two years on both sides of AI implementation: enterprise boardrooms and solo founders, agencies and SaaS companies, creator brands and consulting firms, content businesses and coaching programs. Thousands of conversations. Hundreds of deals and implementations. The same constraint in every one: the expert was the system, and the system could not survive without the expert in the room.
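The multiplicative structure of the model can be sketched in a few lines. This is an illustration only: the 0-to-1 scoring scale, the variable scores, and the function name are my assumptions; the paper specifies only that the seven terms multiply, so a zero in any one variable zeroes out the whole product.

```python
from functools import reduce

def maturity(scores: dict[str, float]) -> float:
    """Multiply the seven variables (T, S, K, A, E, I, L).

    Because the model is multiplicative rather than additive,
    a single zero collapses the output regardless of the rest.
    """
    return reduce(lambda acc, v: acc * v, scores.values(), 1.0)

# Hypothetical scores on an assumed 0-1 scale.
strong = {"T": 0.9, "S": 0.8, "K": 0.9, "A": 0.7, "E": 0.8, "I": 0.9, "L": 0.8}
missing_encoding = {**strong, "E": 0.0}  # expertise never left the founder's head

print(maturity(strong))            # a healthy product of all seven terms
print(maturity(missing_encoding))  # 0.0: six strong variables cannot compensate
```

An additive model would let strength in six variables paper over a missing seventh; the multiplicative form is what makes each variable a hard dependency.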
Table of Contents
Each chapter is designed to stand on its own, though the model builds cumulatively.
Seven variables, one formula. Why they are multiplicative, what happens when any single one equals zero, and the three scenes that revealed the constraint underneath every business I opened up. The architecture of the system in full.
Truth is structural, not moral. Every piece of content is either brand equity or brand debt, and AI just made cross-referencing contradictions a thirty-second operation. The Arup deepfake. The Leeway Framework. Why the model survives after exposure. Why truth is the interest rate on everything else in the system.
Air France 447 lost its instruments over the Atlantic and the pilots pulled back when they should have pushed forward. The same failure mode plays out in every business that cannot read the gap between AI capability and adoption. The S-formula. The five adoption levels. The $285 billion repricing that started with a single text file in a GitHub repository. The noise flood and the four filters that cut through it.
Rick Rescorla predicted two attacks on the World Trade Center and saved 2,687 lives because his pattern library was built through forty years of closed feedback loops. The Port Authority had the same information and told people to stay at their desks. Knowledge is the signal source. The four sub-components of K, the 1,000-hour canyon, and why AI cannot replace what it takes years of loops to build.
Everybody tries to automate from A to Z. The five percent who succeed start from Z and reverse-engineer backward. How to map the load-bearing walls of any business, the six domains that repeat across every industry, the Z→A trace, and why BCG found that eighty percent of companies layer AI onto unchanged processes and get nothing from it.
How to get what is inside your head out of your head and into a system that carries it. The two channels: explicit rules and tacit judgment. Siemens and Chalmers University found that fully encoded expert knowledge improved AI output by 206%. The jagged frontier, the encoding flywheel, and why the first thousand hours of domain experience matter more than the model you run on.
Where the encoding lives once it has a home, what the feedback loops produce at scale, and what remains genuinely unpredictable. Infrastructure, leverage, and the closing argument for why the window described in this paper is still open and why waiting is the most expensive decision available.
A: Quick Start — 5 Days, 3 Hours
B–F: Chapter Diagnostics & AI Auditor Prompts
Full source list with evidence tiers.
This paper draws on peer-reviewed research (Levine, Klein, Endsley, Kosinski, Siemens/Chalmers), published analysis from Shannon, Wiener, Ashby, and Beer, contemporary work from Aschenbrenner, Amodei, Sutskever, and Luna, and two years of direct field observation across eight hundred-plus founder conversations. Where I report what I observed, I say so. Where I cite research, I cite it. The two are never blended. Full citations and source authority rankings are available in the References section.