AI didn't create the problem of lost company knowledge, but it makes the cost impossible to ignore. Every AI task starts from zero - no memory of the last coordination meeting, no awareness of the political constraint that ruled out Option B. Context engineering is a management discipline, and firms need to begin that work before the tools become critical infrastructure.
Scenario: a senior engineer leaves mid-project. The reshuffled team reads every drawing, every report, every model. They're all good engineers, but they still don't know why the transfer beam at Level 4 is where it is, because that conversation happened between the structural lead and the architect in a corridor during a site visit eighteen months ago. It's not in the minutes. It's not in the model. It informed every decision that followed. Now imagine not a new engineer, but an AI tool. It has the same problem, but it can't even ask the question.
The tools are getting faster and more capable, but the failures are increasingly subtle - technically plausible, contextually wrong. The tool did what it was asked. It just didn't know what it wasn't being told.
Context is the currency a professional organisation runs on - a shared understanding of what is being built, why it matters, what has already been tried, and which decisions are settled. Human teams absorb it by osmosis. By being in the room. By half-hearing the client conversation and remembering why that detail failed on a previous project.
AI tools are never really "in the room". Every task starts from zero. They have no memory of the last coordination meeting, no awareness of the political constraint that ruled out Option B, no sense of which structural decision is a load-bearing principle and which is just how it ended up. Fred Brooks observed back in 1975 that a significant design is the product of many people who have spent a long time negotiating about what it should do. The finished design is the outcome of that negotiation; the residue lives in the shared memory of the team.
Software teams hit this wall first because they delegated execution to AI agents earliest. A ticket that made complete sense to a developer who had been in the room for six months was useless to an agent starting cold. Teams responded by writing context down explicitly - decision logs, product requirements documents, user stories, acceptance criteria. It works well at small scale. We don't yet know how it survives real organisational complexity.
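To make that concrete: a decision-log entry, in the spirit of the Architecture Decision Record convention many software teams have adopted, can be a few structured fields. A hypothetical example, using the transfer-beam scenario from earlier (all project details invented for illustration):

```
Decision 014: Shift column grid line C 300mm east at Level 4
Status:       Accepted
Context:      Facade coordination - unitised panels clash with grid
              line C; discussed with the architect during site visit.
Decision:     Move grid line C 300mm east; transfer beam at Level 4
              picks up the offset column above.
Consequences: Riser position is now fixed; ceiling zone at Level 4
              reduced. Revisit if the facade system changes.
```

Thirty seconds to write, and it captures exactly the corridor conversation that would otherwise leave the building with the senior engineer - in a form both a new hire and an AI tool can actually use.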
A structural scheme evolves across the early RIBA stages. By Stage 3, the primary structure reflects dozens of decisions - planning constraints, client programme requirements, contractor feedback, a coordination issue with the facade system that forced the column grid to shift. The slab depth, the ceiling height, the riser position - each is the residue of a coordination conversation that happened between disciplines and left no written trace. The drawings show the outcome. They don't show the path. That distinction matters. An AI tool asked to check the specification, draft a design report, or answer a query about the structural strategy is working from the drawings, not the path that got there.
Small firms have coherence for free. Five people, a handful of projects - the humans around the tool provide the context, and the tool executes well. This is why enthusiastic early adopters tend to be small practices. They're not wrong about their experience, but they have a specific size advantage. Past a certain headcount, coherence has to be produced deliberately. AI is an asymmetric multiplier - it makes good process faster and weak process more confidently wrong.
The solution isn't necessarily "write more things down". The discipline of producing legible knowledge is harder than it looks, and it's a cultural and management problem before it's a tooling problem. But something new is becoming possible. Agents that consume context need agents that produce it. Tools extracting decisions from meeting minutes, reasoning from technical query responses, the why-we-did-it from design reviews - building a written residue the organisation would never have produced on its own. The technology is early, but the direction is right.
The firms that will get the most from AI tools are the ones that already know their hardest problem is organisational coherence. That the knowledge walking out the door with a senior engineer is operational risk. That a drawing is not the same thing as understanding how a project became what it is.
AI didn't create this problem, but it made the cost visible. Context engineering is not really a technical discipline; it is a management one - deciding which conversations matter, which decisions need a written residue, and how an organisation remembers why it did something after the people involved have moved on.
That work needs to begin before the AI tools become critical infrastructure.
If you would like to find out more about working effectively with AI, please do get in touch.