I tried an AI coding assistant. The output was impressive — coherent, confident, faster than anything I'd typed by hand. A few copy-pastes later, the feature was done. The pull request went up. Reviews passed.
Two sprints later, a regression surfaced in production. The generated code had made a reasonable assumption. Just not the right one for our system.
The tool performed to the best of its current ability. What we hadn't worked out yet was where to rely on it and where to remain skeptical.
The gap is not tool familiarity. AI is a fast but context-poor collaborator — it has no awareness of the system it is operating in, only what it is explicitly given. What was missing was a specific set of disciplines: knowing how to provide the right context, shape the right intent, verify what comes back, and recognize when the confidence of the response masks an incorrect assumption.
That capability does not come with the tool. It has to be built deliberately.

The codebase has been around long enough to carry the fingerprints of engineers who are no longer here. Architectural decisions made in a different context. Conventions that vary by module, sometimes by file. Some parts are clean. Others are held together by institutional memory that lives in the heads of one or two people.
Comments explain the what, rarely the why. It works. Mostly.

What an AI tool sees is the code. What it can't see is everything that shaped it.
The invisible context AI cannot see has to be made visible. Not as a one-time effort but as a fundamental shift in how the codebase is maintained going forward — so that the reasoning behind decisions, the boundaries between modules, and the intent behind features are no longer locked in people's heads or lost when they leave.
AI assistance has quietly entered the daily workflow — differently for every engineer. One uses it for everything. Another only for boilerplate. A third doesn't trust it at all. No shared understanding of where it fits, what it should produce, or how output gets verified.
The process looks the same on the surface. Underneath, it has become harder to see what is actually happening.

Nobody decided to adopt AI inconsistently. It simply filled whatever space each engineer left for it.
AI left to find its own place in a team's workflow will find one. That is not integration — it is drift. The difference rarely shows up in daily output. It shows up when something goes wrong and the team discovers that nobody owns the decision that caused it.
Integration requires deliberate decisions about where AI belongs, what it produces at each point, and where human judgment remains non-negotiable.

We've started asking questions. Not in a crisis — just steadily, in the background. How much of what ships now has AI in it? Are our engineers still developing judgment, or outsourcing it?
Nobody is panicking. But nobody has clear answers either. The questions are new enough that we're not certain we're even asking the right ones yet.
Most organizations are encountering these questions for the first time. There is no inherited playbook for where AI should and shouldn't be trusted, what it can and cannot be given access to, or how responsibility is understood when AI is involved in what ships.
These boundaries need to be drawn deliberately. Because if they aren't, events will draw them instead.

These scenarios are not exceptional. They are the ordinary conditions under which most engineering organizations are meeting AI right now. The following may help put some method into the madness.
What follows is a practical framework built around what these scenarios reveal. If this resonates with where your organization is, we would like to have that conversation.