Don't outsource your thinking

2026

Something is happening in our industry that nobody wants to say out loud.

Engineers are shipping more code than ever. Velocity is up. Cycles are tighter. Sprint demos look better than they ever have. And yet, when you ask people to walk you through what they actually built last week - really walk you through it, line by line, decision by decision - most of them can't.

Not because they're lazy. Because they didn't write it. They prompted it. And somewhere between the prompt and the merge, the thinking got skipped.

We use AI to write the architecture docs. We use AI to write the code. We use AI to review the code. We use AI to write the PR descriptions, the commit messages, the post-mortems, even the Slack update explaining why the migration slipped by a day.

At some point you have to ask: what part of the job is left for the human?

The atrophy nobody is measuring

There's an obvious productivity story being told everywhere. AI added X% more throughput, saved Y% of time, merged Z% more PRs per engineer. Executives love these numbers. They show up on earnings calls.

The numbers nobody puts on a slide are the ones that actually matter:

  • How many engineers can still describe how their own service works without opening Cursor?
  • How many can debug a production issue without asking a model to walk them through it?
  • How many can review a PR end to end and tell you what's wrong with it from first principles?
  • How many can answer a product question in a meeting without saying "let me check"?

Code ownership used to mean something. You shipped it. You knew it. You owned it. If something broke at 2 AM, you knew where to start because the system lived in your head.

Today, the system lives in a context window. And the context window doesn't get paged.

What gets lost when you skip the reading

Reading code is how you build a model of the system. Not the code you wrote - the code other people wrote. Junior engineers used to spend their first months reading. Reading other people's PRs. Reading the codebase top to bottom. Reading old threads to understand why a weird workaround exists.

That reading was never about typing speed. It was about absorbing the shape of the system, the trade-offs the team made, the places where pragmatism beat principle. It was how you became someone who could be trusted with hard problems.

Now we let the model read for us, summarize what it found, and feed us the highlights. The summary is fast. It's also wrong in ways you won't notice until production.

You can't outsource the building of your own mental model. The output looks the same - code, docs, decisions - but the engineer at the end of the loop is hollow.

Faster, at what cost

I'm not anti-AI. I built an entire product in a week using only AI agents. I use these tools every day. They make me dramatically more effective when I treat them as a tool, and dramatically less capable when I treat them as a substitute.

The difference isn't subtle. It comes down to one question:

After this work is done, do I understand more, or less?

If I'm using AI to plow through boilerplate and then I read what it produced, push back, and integrate it into a system I understand - I understand more. The AI compounded my judgment.

If I'm using AI to ship a feature in a domain I haven't bothered to learn, in a part of the codebase I haven't bothered to read, with a review that another model rubber-stamped - I understand less. The AI replaced my judgment.

Both ship features. Only one of them produces engineers.

The career problem nobody is talking about

Junior engineers are getting hit hardest, and they don't realize it yet.

The path from junior to senior was always about reps. Reps reading bad code. Reps debugging your own mistakes. Reps owning a piece of the system long enough to deeply understand it. That's how you build the intuition that lets you make calls under uncertainty.

Skip the reps and you skip the intuition. You become someone who can prompt for output but can't tell when the output is wrong. The market doesn't need that role at scale. It already has the model.

Senior engineers aren't immune. The skills that got you here decay if you stop using them. Five years of "just ask Claude" and you'll find you can't reason about a system the way you used to.

Tools don't take your job. Atrophy does.

Holding onto ownership

A few things I try to hold onto, regardless of how much AI I'm using:

Read the code that ships under your name. All of it. If the diff is too long to read, the diff is too long to merge.

Be able to explain it without the model. If you can't walk a teammate through your change in a 1:1 without pulling up Cursor, you don't understand it yet.

Do the thinking before you prompt. Use the model to execute, not to decide. Decisions made inside a chat window tend to optimize for what sounds good inside a chat window.

Own the bugs. When something breaks, debug it yourself first. Not because the model can't help - but because the debugging is where the learning lives.

Pick problems the model can't solve for you. The hard parts of engineering have never been syntax. They've been judgment, taste, and context. Those are still yours to develop.

Where this leads

I don't know what software engineering looks like in five years. I do know that the engineers who keep thinking will be massively more valuable than the ones who quietly stopped.

Intelligence is becoming cheap. Judgment, taste, ownership, and the willingness to actually understand what you ship - those are getting more valuable, not less.

The tools are incredible. Use them. Just don't let them think for you.