My AI agent has more commits than me (and I'm at peace with it)
Last week, a coworker sent me a screenshot of my GitHub profile. "Bro, you okay? You've got 47 commits this week."
What he didn't know is that 40 of those commits were made by Claude Code while I was drinking coffee and contemplating my existence.
My first instinct was to feel ashamed. Like getting caught cheating on an exam. Except there was no exam, nobody was evaluating me, and technically the code worked better than anything I would've written alone at 3 AM with my third Red Bull coursing through my veins.
Welcome to 2026, where your AI agent has a better contribution history than you, and that's... fine?
The day I stopped measuring my worth in lines of code
A few months ago, I was optimizing an API that handles millions of requests daily. Government system, sensitive data, the kind of project where a mistake means someone can't renew their ID and I'm probably going to receive emails in ALL CAPS.
I had an endpoint responding in 800ms. Unacceptable.
The previous version of me—the "pure" developer who refused to use anything he hadn't written himself—would've spent three days reading PostgreSQL documentation, experimenting with indexes, and eventually implementing a mediocre solution that worked "well enough."
Instead, I opened Claude Code and wrote:
> This endpoint takes 800ms. The query uses JOIN on three tables.
> I need to get it under 100ms. Here's the schema.
Fifteen minutes later, I had a solution with composite indexes, a query restructure that avoided a sequential scan I didn't even know existed, and a suggestion to implement Redis caching that reduced database load by 73%.
Final response time: 45ms.
You know how many lines of code I wrote? Four. The prompt.
You know how much I care? Absolutely nothing.
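For the curious, the caching piece of that fix can be sketched in a few lines. This is a hedged illustration, not the production code: an in-memory `Map` stands in for Redis, and every name here is hypothetical.

```typescript
// Read-through cache sketch. A Map stands in for Redis; in production
// you'd swap Map operations for GET/SET with an EX ttl on a Redis client.

type Fetcher<T> = () => Promise<T>;

const cache = new Map<string, { value: unknown; expiresAt: number }>();

async function cached<T>(key: string, ttlMs: number, fetch: Fetcher<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value as T; // cache hit: skip the three-table JOIN entirely
  }
  const value = await fetch(); // cache miss: run the (now index-backed) query
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

Used like `cached(`user:${id}`, 60_000, () => repo.findUser(id))`, repeated hits within the TTL never touch the database, which is where the bulk of that 73% load reduction would come from.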
"But you didn't write that code"
Correct. I also didn't write the TypeScript compiler, or the Node.js runtime, or the PostgreSQL driver, or the Linux kernel that runs all of this.
Does that make me less of a developer?
Because if the answer is yes, then the only "real developer" is someone programming in Assembly directly on silicon while growing their own transistors in the backyard.
The industry has spent decades building layers of abstraction precisely so we don't have to reinvent the wheel every time. React, NestJS, TypeORM, Docker—they're all ways of saying "someone else solved this problem, I'm going to focus on the next one."
AI is simply the next layer.
A layer that sometimes hallucinates npm packages that don't exist and suggests using left-pad without irony, but a layer nonetheless.
Impostor syndrome on steroids
Here comes the uncomfortable part that nobody wants to admit.
When an AI agent generates code that works, and you don't completely understand how it works, something primitive activates in your brain. A little voice that says: "You're a fraud. You don't deserve that salary. Any day now they're going to find out."
Impostor syndrome, but now with tangible evidence that there's actually something that knows more than you.
The problem is that little voice is measuring the wrong things.
Last week, Claude generated an authentication middleware for an API. Clean code, error handling, perfect TypeScript types. But there was a problem: it assumed all tokens came in the Authorization header when our legacy system also accepts them as query parameters for reasons I'd rather not remember.
You know who caught that problem? Me.
You know who understood the system context, the historical decisions, and the consequences of breaking backward compatibility? Me.
You know who had to explain to the product manager why we can't just "fix it properly" without a migration plan? Also me.
AI can write code. It can't navigate organizational politics, understand three years of technical debt, or decide when it's the right time to refactor versus when you need to stick with the ugly solution because there's a real deadline.
That's still human work. And it turns out that's the work that actually matters.
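To make the middleware story concrete, the fix I'm describing boils down to checking both token locations. This is a simplified sketch with hypothetical names, not our actual middleware:

```typescript
// Accept the token from the Authorization header, or, for legacy callers,
// from a query parameter. The query path exists only for backward
// compatibility and should die once those clients are migrated.

interface RequestLike {
  headers: Record<string, string | undefined>;
  query: Record<string, string | undefined>;
}

function extractToken(req: RequestLike): string | null {
  const header = req.headers["authorization"];
  if (header?.startsWith("Bearer ")) {
    return header.slice("Bearer ".length);
  }
  // Legacy path: some old clients still send ?token=...
  return req.query["token"] ?? null;
}
```

The AI's version only covered the first branch. It was clean, typed, and wrong for our system, which is exactly the kind of gap only someone who knows the history can close.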
What actually measures if you're a good developer
After months of using Cursor, Claude Code, and enough AI tools to justify a monthly bill I'd rather not show my partner, I reached a conclusion:
AI doesn't hide your weaknesses. It amplifies them.
If you don't understand architecture, AI will generate code that works today and explodes in production tomorrow. If you don't know how to ask the right questions, you'll receive wrong answers. If you don't have the judgment to evaluate solutions, you'll implement the first suggestion without questioning if it's the best one.
I've seen junior developers use ChatGPT to generate code that technically compiles but violates every business rule in the project. I've also seen senior developers use the same tools to solve in hours what used to take weeks.
The difference isn't in the tool. It's in who's using it.
A hammer doesn't make you a carpenter. But a carpenter with a good hammer builds houses faster than one chopping wood with their bare hands.
The plot twist nobody saw coming
The greatest irony of this whole AI revolution is that it's forcing us to be better engineers, not worse.
Before, you could survive by memorizing syntax, copying code from Stack Overflow, and googling errors until something worked. Now, when AI can do all of that faster than you, the only thing left is what AI can't do:
- Understand the real problem behind the technical problem
- Design systems that scale and are maintainable
- Make decisions with incomplete information
- Communicate technical solutions to non-technical people
- Know when the "correct" solution is actually the wrong one for your context
Basically, the skills that always should have mattered but that the industry insisted on ignoring because it was easier to measure "years of experience with React."
My new mental framework
I've stopped asking myself "Did I write this code?" and started asking "Do I understand why this code solves the problem?"
If the answer is yes, it doesn't matter who wrote it.
If the answer is no, I have work to do before merging.
Yesterday I spent two hours reading the documentation for a library that Claude suggested I use. Not because anyone forced me, but because I wanted to understand the trade-offs. Understand when it's the right choice and when it's overkill. Understand what could go wrong.
That's being a developer in 2026. It's not writing every line. It's being responsible for every line, whether you wrote it or not.
The future belongs to those who stop counting commits
If you still measure your productivity in manually written lines of code, I have bad news: that metric was always absurd, and now it's also obsolete.
The developer of the future—which is actually already the present—is the one who knows how to orchestrate. The one who understands the big picture. The one who can take the pieces AI generates and assemble them into something that works, scales, and won't wake you up at 3 AM with a Datadog alert.
So yes, my AI agent has more commits than me.
I also have a system in production that handles millions of requests, responds in milliseconds, and hasn't gone down in months.
Who gets the credit? Honestly, I don't care. The code works. The users are happy. The team can sleep peacefully.
If that's not being a developer, then I don't know what is.
TL;DR for those who scrolled to the bottom
- Using AI isn't cheating. It's the next layer of abstraction, like frameworks, high-level languages, and before that, compilers.
- AI amplifies what you already are. If you're good, it makes you faster. If you don't understand what you're doing, it gets you into trouble faster.
- A developer's value was never in typing. It's in decisions, context, and the ability to solve real problems.
- Stop counting lines of code. Start counting problems solved.
- If your AI agent has more commits than you, congratulations: you're probably spending your time on what actually matters.