Documentation is Dead. Long Live Documentation.

I’m going to say something that will make every engineering manager uncomfortable: Stop asking your team to write documentation.

Not because documentation doesn’t matter. It matters more than ever. But because asking humans to document their work after they’ve done it is a process that has failed consistently for thirty years, and no amount of “definition of done” checklists or documentation sprints is going to fix it.

The people who know the most write the least. The docs that get written are stale within weeks. And the knowledge that matters most — the decisions, the gotchas, the “why” behind the code — rarely makes it into a document because it’s not the kind of thing you sit down and write.

The Documentation Death Spiral

I’ve watched this cycle play out on every team I’ve been part of:

Week 1: “We need to document this.” Everyone agrees. Someone creates a Confluence space.

Week 4: A few pages exist. They’re pretty good. Written by the one person who cares about docs.

Week 12: The pages are getting stale. A new service was added, but nobody updated the architecture doc. The database schema changed but the data model doc still shows the old structure.

Week 24: Engineers actively avoid the docs because they’ve been burned by outdated information. “Don’t trust the wiki” becomes team wisdom. The one person who maintained it left or burned out or just stopped caring.

Week 52: Someone suggests a documentation sprint.

I’ve seen this exact pattern at four companies. The timeline varies. The outcome doesn’t.

The Inconvenient Truth About Documentation

Here’s what nobody wants to admit: the most valuable knowledge is generated during work, not after it.

When an engineer decides to use cursor-based pagination instead of offset-based — that’s a decision being made, with context and rationale, right now, in the middle of implementation. If you wait until after the sprint to document it, the rationale is fuzzy. The alternatives considered are forgotten. The specific constraint that drove the decision (“our table will have 10M+ rows and offset pagination degrades”) is lost.
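To make that constraint concrete, here is a minimal sketch of the two approaches. The events table, its columns, and the page size are hypothetical, not from the article:

```typescript
// Offset pagination: to serve a deep page, the database must walk past
// every row before it. On a 10M+ row table, deep pages get slow.
const offsetPage = `
  SELECT id, created_at, payload
  FROM events
  ORDER BY created_at, id
  LIMIT 50 OFFSET 5000000  -- scans ~5M rows just to discard them
`;

// Cursor pagination: the client sends the last (created_at, id) it saw,
// and an index seek jumps straight there. Cost stays flat at any depth.
const cursorPage = `
  SELECT id, created_at, payload
  FROM events
  WHERE (created_at, id) > ($1, $2)  -- cursor from the previous page
  ORDER BY created_at, id
  LIMIT 50
`;
```

That parenthetical constraint is exactly the part that evaporates by the time a documentation sprint rolls around.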

When an engineer discovers that a module has a hidden coupling to another service — that’s an insight. It happened during debugging, in the heat of the moment, with full context. Ask them to write it up in a doc later and you get “Module A depends on Module B” — technically correct but missing all the useful context about how and why and what to watch out for.

The knowledge is richest at the moment it’s created. Every hour after that, it decays.

What If Documentation Just… Happened?

I started thinking about this differently when I realized that every meaningful coding session naturally produces documentation-worthy knowledge. The engineer doesn’t sit down to document — they explain their reasoning to the AI, make decisions, discover patterns, fix bugs. The knowledge is right there in the conversation.

What if the system captured it at that moment? Not the full conversation — that’s noise. But the distilled knowledge: the decision with its rationale, the pattern with its context, the error with its root cause and fix.

Not as a wiki page someone has to maintain. As structured, typed, searchable knowledge that stays current because it’s generated from the work itself.
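The article doesn't prescribe a schema, but one plausible shape for "structured, typed" knowledge, sketched here in TypeScript with illustrative field names, might look like this:

```typescript
// Hypothetical shape for captured knowledge items: one typed record per
// decision, pattern, or error, each linked back to the session it came from.
type KnowledgeItem =
  | {
      kind: "decision";
      summary: string;         // e.g. "billing service gets its own database"
      rationale: string;       // why, including the constraint that drove it
      alternatives: string[];  // what was considered and rejected
      scope: string;           // which services or modules it applies to
      sourceSessionId: string; // link back to the conversation
      capturedAt: Date;
    }
  | {
      kind: "pattern";
      summary: string;         // e.g. "retries use exponential backoff"
      context: string;         // when the pattern applies
      occurrences: string[];   // sessions where it was observed
      capturedAt: Date;
    }
  | {
      kind: "error";
      symptoms: string;        // what you see when it bites
      rootCause: string;       // why it happens
      fix: string;             // what resolved it
      sourceSessionId: string;
      capturedAt: Date;
    };
```

Because each item is typed, it can be searched, filtered by scope, and traced back to its source conversation instead of living as free prose on a wiki page.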

An engineer explains why the billing service has its own database. That’s captured as a decision with rationale, scope, and history. Six months later, when a new engineer asks, “Why is billing separated?” — the answer exists. Not in a Confluence page someone forgot to update, but as a knowledge item linked to the conversation where the decision was made.

Three engineers independently use the same retry pattern. The system captures the convention. New engineers see the established pattern from their first session — without anyone writing a style guide.
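The article doesn't specify the retry pattern, but as a hypothetical, suppose the convention the system noticed was exponential backoff with jitter:

```typescript
// Illustrative example of a convention worth capturing: retry with
// exponential backoff and full jitter. Parameter values are made up.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // Randomized delay spreads retries out so clients don't all
      // hammer a recovering service at the same moment.
      const delay = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```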

Someone discovers that a module has a subtle memory leak under specific conditions. The error pattern is captured with symptoms, root cause, and fix. Next time someone sees similar symptoms, the answer is already there.

The Compound Documentation Effect

Traditional documentation is a snapshot. Someone writes it at a point in time, and it begins decaying immediately.

Knowledge captured as a byproduct of work is a stream. Every session potentially adds to it. After six months, a team has captured hundreds of decisions, patterns, and insights — without anyone writing a single doc.

A new engineer joining the team has access to all of it from their first session. Not as a reading list of stale wiki pages, but as active context that the AI uses to inform every interaction.

This is what documentation should have been all along. Not a separate activity competing for time with shipping features. A natural side effect of the work itself.

The Hard Part

This only works if the extraction is good. Capture too much and you get noise. Capture the wrong things and you pollute the system.

The key insight I keep coming back to: the entity doing the work is the best judge of what’s worth capturing. A separate system processing transcripts doesn’t have the context. The AI that participated in the conversation — that made the decision, that explored the alternatives, that understands the constraints — knows exactly what’s worth remembering.
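One way to picture that design choice, reusing the hypothetical KnowledgeItem shape sketched earlier: give the working agent a capture tool it can call the moment a decision, pattern, or error crystallizes, rather than running a separate pipeline over transcripts afterward. Everything here is an assumption about how such a system might be wired, not a description of any particular product:

```typescript
// Hypothetical wiring: the agent that did the work is also the one that
// decides what to capture, while rationale and alternatives are still fresh.
interface KnowledgeStore {
  save(item: KnowledgeItem): Promise<void>;
}

// Exposed to the agent as a tool, alongside file edits and shell commands.
// A post-hoc transcript processor would have to reconstruct the "why";
// the in-session agent can simply state it.
function makeCaptureTool(store: KnowledgeStore) {
  return async (item: KnowledgeItem): Promise<void> => {
    await store.save(item);
  };
}
```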

Documentation is dead as a deliberate activity. As a side effect of working with an intelligent system? It’s never been more alive.



