The Last Mile of Expertise

March 2026 · Essay

There is a moment in every doctor's career when they stop consulting the textbook and start consulting their gut. The textbook says the patient presents with symptom A, symptom B, and symptom C, which indicates diagnosis X. The gut says something else is going on. The experienced doctor orders one more test. The test comes back positive for diagnosis Y, which the textbook would not have suggested.

That gap — between what the textbook covers and what the experienced practitioner knows — is the last mile of expertise. It is the most valuable knowledge in any field. It is also the hardest to transmit.

Where Knowledge Goes to Die

Every organization has a person everyone goes to. Not the most senior person, not the person with the best title, but the person who has been in the trenches long enough to have seen everything. The one who says “we tried that in 2019 and here's why it didn't work.” The one who can glance at a production dashboard and say “that spike happens every third Thursday, it's the batch job, ignore it.”

When that person leaves, their knowledge leaves with them. Not the documented knowledge — the wiki pages, the runbooks, the architecture diagrams. Those stay. What leaves is the judgment. The rules of thumb. The “I would not do it that way” instinct that took ten years to develop. The shortcuts that are not in any playbook because nobody thought to write them down.

We have a term for this in knowledge management: “knowledge attrition.” But the term is too gentle. It implies gradual erosion. What actually happens is a cliff. The person walks out. The knowledge is gone. The organization discovers, through painful trial and error, all the things that person used to catch before they became problems.

Why Documentation Fails

The standard response to knowledge attrition is documentation. Write it down. Create a wiki. Build a runbook. The problem is that documentation captures procedures, not judgment.

A runbook says: “If the error rate exceeds 5%, page the on-call engineer.” The expert knows: “If the error rate exceeds 5% on a Monday morning, it's usually the weekend deployment catching up, wait 20 minutes. If it exceeds 5% on a Thursday afternoon, something is genuinely broken, page immediately.”

The runbook is correct. The expert is more correct. The difference is context — the ability to interpret the same data differently depending on surrounding conditions. This contextual judgment is what makes an expert an expert. And it is precisely what traditional documentation cannot capture, because documentation is context-free by design.

Constructs are not context-free. A construct can say: “If the error rate spikes, consider the day and time. Monday mornings after a weekend deployment often show elevated error rates for 20 minutes as caches warm. Thursday afternoons are genuine incidents until proven otherwise.” A construct can encode the conditional, context-dependent reasoning that a runbook strips away.
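The conditional rule above is concrete enough to sketch in code. Here is a minimal Python illustration; the function name, thresholds, and return values are invented for this example, not part of any real alerting system:

```python
from datetime import datetime

# Illustrative thresholds taken from the essay's example.
ERROR_RATE_THRESHOLD = 0.05   # 5% error rate
WARMUP_GRACE_MINUTES = 20     # Monday-morning cache-warm window

def triage_error_spike(rate: float, now: datetime,
                       minutes_since_spike: int) -> str:
    """Return 'ok', 'wait', or 'page' for an error-rate reading.

    Encodes the contextual judgment: a Monday-morning spike after the
    weekend deployment is usually caches warming, so wait it out; a
    Thursday-afternoon spike is a genuine incident until proven otherwise.
    """
    if rate <= ERROR_RATE_THRESHOLD:
        return "ok"
    monday_morning = now.weekday() == 0 and now.hour < 12
    if monday_morning and minutes_since_spike < WARMUP_GRACE_MINUTES:
        return "wait"   # likely the weekend deployment catching up
    return "page"       # treat as a real incident
```

The point is not the code itself but what it makes visible: the runbook's context-free rule is a single `if`; the expert's rule needs the calendar and the clock as inputs.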

The Knowledge That Has No Name

The deepest expertise is not even recognized as knowledge by the person who holds it. It is so thoroughly internalized that it feels like common sense.

A senior database administrator does not think of herself as knowing something special when she looks at a query plan and immediately spots the missing index. To her, it is obvious. The sequential scan on a table with 10 million rows is right there in the plan. How could you miss it? She does not realize that a junior DBA would look at the same plan and see a wall of numbers.

This is the most dangerous form of knowledge loss, because the expert does not know she has it, so she cannot be asked to document it, and when she leaves, nobody knows what questions to ask the replacement. The knowledge has no name. It was never a skill that was taught. It was a perception that developed through years of exposure.

Writing a construct for this kind of knowledge requires a different approach from writing documentation. You are not asking the expert “what do you do?” You are asking “what do you see?” Not the steps, but the signals. Not the procedure, but the pattern recognition. A construct for database performance review does not say “check the query plan.” It says “look for sequential scans on tables over 100,000 rows, nested loop joins where one side has no index, and sort operations that spill to disk.” It names the specific things the expert sees instantly and the novice overlooks.

The Transfer Problem

Traditionally, last-mile expertise was transferred through apprenticeship. You sat next to the expert for years. You watched. You absorbed. You eventually developed your own version of their intuition. This worked, but it did not scale. One expert could train maybe three or four apprentices over a career.

Books scaled the transfer of procedural knowledge. You could learn the steps without the mentor. But books could not transfer judgment. You could read about medicine but you could not develop clinical intuition from a textbook. The last mile remained inaccessible without direct mentorship.

Constructs sit at an interesting point in this progression. They are not apprenticeship — you cannot sit next to a construct and absorb its instincts over years. But they are more than books — they can express conditional reasoning, situational awareness, and domain-specific heuristics that books flatten into general principles.

A construct says: “When reviewing a pull request that touches authentication code, apply paranoid scrutiny. Assume the author made a mistake. Check every comparison for timing attacks. Check every token for sufficient entropy. Check every session for proper invalidation. Authentication code is different from other code. The cost of a bug is not a broken feature but a compromised system.”
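Two of those checks can be made concrete. A sketch in Python, using only standard-library calls (`hmac.compare_digest`, `secrets.token_urlsafe`); the function names are hypothetical, chosen for the illustration:

```python
import hmac
import secrets

def insecure_token_check(supplied: str, expected: str) -> bool:
    # BAD: '==' short-circuits at the first differing byte, so response
    # time leaks how much of the token an attacker has guessed correctly.
    return supplied == expected

def secure_token_check(supplied: str, expected: str) -> bool:
    # GOOD: compare_digest runs in time independent of where the inputs
    # differ, closing the timing side channel.
    return hmac.compare_digest(supplied.encode(), expected.encode())

def new_session_token() -> str:
    # GOOD: 32 bytes (~256 bits) from the OS CSPRNG. Never derive tokens
    # from timestamps or a non-cryptographic random generator.
    return secrets.token_urlsafe(32)
```

The construct's job is to make sure the reviewer, human or AI, never reaches for the first version without noticing.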

That paragraph is not a procedure. It is a stance — a way of approaching a category of work. It comes from someone who has seen authentication bugs in production and carries the scar tissue. No textbook teaches that paragraph. No documentation system produces it. It exists because someone who lived through the consequences decided to encode their hard-won vigilance into a document that an AI agent could follow.

What We Stand to Gain

The world is full of last-mile expertise that is disappearing. Every day, experienced professionals retire and take their judgment with them. Not their procedures — those are in the manuals. Their judgment. The sense of when the manual is wrong. The instinct for which rule of thumb applies in which situation. The pattern recognition that took decades to develop.

Constructs are not a perfect vessel for this knowledge. They cannot capture everything. They cannot replace apprenticeship entirely. But they can capture more than any previous medium. And unlike an apprenticeship, a construct can be forked ten thousand times. The retired surgeon's triage instincts do not train three residents. They train every emergency room AI in the country.

The last mile of expertise has always been the most valuable and the most fragile. Constructs do not eliminate the fragility. But for the first time, they make it possible to attempt the transfer at scale.

Related: Why Your Best Engineer Can't Write a Construct examines why expertise resists articulation. The Construct Economy, 2030 imagines what happens when this transfer scales globally.