Claude, Cognitive Regression

There’s a gold rush happening right now. Everyone’s scrambling to “learn AI” — the message is loud and clear: get on the train or get left behind.

Anthropic recently published a study that put hard numbers to something many of us have been feeling. They ran an experiment where developers learned a new Python library — half with AI assistance, half without. The AI group finished a bit faster, but scored the equivalent of nearly two letter grades lower on comprehension tests afterward. The speed came at a real cost to understanding, especially around debugging. This isn’t a fringe concern anymore — even the companies building these tools are flagging the tradeoff.

On one side, you have the refusers — people who treat AI like a fad, convinced that if they just ignore it hard enough, it’ll go away. On the other, you have the surrenderers — people who’ve essentially replaced their own thinking with an AI subscription, copy-pasting their way through work they used to understand deeply. I’ve spent the last month deep in AI-driven coding, and I think both extremes miss the point. Here’s my take.

The Offloading Problem

Cognitive offloading isn’t new. We’ve been doing it for decades, and mostly we’ve been fine. GPS killed our ability to navigate cities from memory. Calculators made mental arithmetic a party trick rather than a survival skill. Smartphones mean nobody memorizes phone numbers anymore. Each time, we traded a cognitive ability for convenience, and each time, the trade felt worth it.

AI is the latest entry in this lineage — but it’s different in a way that matters. GPS offloaded a narrow skill. Calculators offloaded mechanical computation. AI offloads thinking itself: pattern recognition, problem decomposition, reasoning through ambiguity, synthesizing information across domains. That’s the core machinery of how knowledge workers create value. When you offload navigation, you lose your sense of direction. When you offload thinking, what exactly do you have left?

I’ll be honest about my own experience. After a month of heavy Claude CLI usage — and I mean heavy, multiple sessions a day, sometimes three terminal windows running simultaneously — I’ve started paying attention to how my brain approaches problems. The “muscle” that used to kick in when I hit a hard bug or a complex architecture decision has gotten softer. Not gone, but softer. I catch myself reaching for the AI before I’ve even finished formulating the problem in my own head. For example, I lean on Claude heavily for research: I provide the points to investigate, but I no longer do the hard work of separating real knowledge from noise myself. My cognitive endurance — the ability to sit with a hard problem for an extended period — weakens when AI gives me instant answers. My pattern recognition dulls when I stop building my own mental models. When AI always sprints for you, your stamina for sustained thinking quietly withers.

Your AI Is Your Team

Here’s the thing, though — this isn’t actually a new problem. It just feels new because of the AI technology wrapped around it.

Anyone who’s led a team of strong engineers has faced the exact same tension. When you move into a leadership role, you’re suddenly surrounded by people who can execute faster than you in most areas. The temptation is to delegate everything. And for a while, it works great — the team’s throughput goes through the roof, and you feel like a genius for assembling such talented people. But a year or two in, you realize something uncomfortable: you’ve been hands-off for too long and can no longer articulate the technical tradeoffs with real depth. You’ve become a project manager wearing an engineering title. The flip side is equally broken — the leader who refuses to delegate bottlenecks the entire team and wastes the talent they hired.

I believe the best engineering leaders solve this by being intentional about which work they keep for themselves. They stay hands-on in the areas that matter most for their judgment — the critical architecture decisions, the gnarliest debugging sessions, the parts of the system where deep understanding means the difference between a good call and a catastrophic one. Everything else, they trust to their team. AI is the same dynamic with a different “team member.” The question isn’t whether to use AI — that ship has sailed. The question is which cognitive work you keep for yourself, and which you hand off.

Think First, Then Collaborate

So what does a healthy relationship with AI actually look like? I think it comes down to one principle: AI should extend your thinking, not replace it.

The practical version of this is simple but requires discipline. Before you open the AI tool, think first. Form your own hypothesis. Write your rough analysis — even if it’s messy, even if it’s wrong. Identify what you actually need help with versus what you could solve with more time and effort. The goal isn’t to prove you don’t need AI; it’s to make sure your own cognitive machinery stays engaged. Then bring AI in — not as an oracle that hands you answers, but as a collaborator that stress-tests your thinking, fills gaps in your knowledge, and accelerates execution on the parts that don’t require your deepest engagement.

The other piece is knowing when to toggle intensity. Not every task deserves the “think-first” treatment. Some work is pure execution — boilerplate code, data transformations, refactoring tasks — where AI collaboration at full speed is the right call. The new skill here is knowing which mode a given problem calls for. Hard architectural decisions? Think first, AI assists. Migrating a hundred files to a new format? Let AI drive.
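To make that split concrete, here is the flavor of task I’d happily hand off end to end: a bulk key rename across config files. The directory layout and key names below are invented for illustration, and this is just one plausible shape of such a chore, not a prescription.

    # Hypothetical "pure execution" chore: rename a legacy config key across
    # many JSON files. Mechanical, easy to verify, safe to let an assistant drive.
    import json
    from pathlib import Path

    def migrate_config(path: Path) -> bool:
        """Rename the legacy 'db_url' key to 'database_url' in one JSON file."""
        data = json.loads(path.read_text())
        if "db_url" not in data:
            return False  # nothing to change in this file
        data["database_url"] = data.pop("db_url")
        path.write_text(json.dumps(data, indent=2) + "\n")
        return True

    if __name__ == "__main__":
        # Walk a (hypothetical) configs/ directory and report how many files changed.
        changed = sum(migrate_config(p) for p in Path("configs").glob("*.json"))
        print(f"Migrated {changed} config file(s)")

Work like this has a checkable right answer, so speed is the only thing at stake. The architectural calls above don’t, which is exactly why those are the ones to keep for yourself.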

The Long-Term Effect

The real danger isn’t AI itself. Tools are neutral; what matters is the pattern of use we build around them. And the insidious thing about cognitive atrophy is that it’s invisible in the short term. You won’t notice your thinking getting shallower this week, or this month. It happens over years, slowly enough that you can always rationalize it. “I’m being more efficient.” “Why waste time on what AI can do?” By the time the decay is obvious, it’s hard to reverse.

The people who will thrive in an AI-saturated world aren’t the ones who use AI the most, and they certainly aren’t the ones who refuse to use it at all. They’re the ones who use AI to amplify their thinking while deliberately — almost stubbornly — preserving their ability to think without it. We’re in the early days of figuring out what that looks like — there’s no playbook yet. But the direction is clear: not rejection, not surrender. Intentional integration. Our own brain is the one we’ll be working with for the next thirty years. That’s worth being deliberate about.