When Your Core Skill Becomes Free
Andrej Karpathy says it hurts the ego. He describes a state of psychosis. After twenty years of writing code, his core skill is suddenly free and instant. Many senior practitioners are quietly processing the same thing.
Two Things Worth Holding Together
Andrej Karpathy, the former director of AI at Tesla and a founding member of OpenAI, said two things recently that are worth holding together.
In January, reflecting on coding with LLMs, he wrote that it “hurts the ego a bit” and that some code that used to be “a point of pride” now feels “free and instant.” Then, on the March 20, 2026 episode of No Priors, he told Sarah Guo he had not typed a line of code since December and described himself as being in a “state of psychosis” trying to figure out what is now possible.
Taken together, those remarks capture something a growing number of experienced practitioners are starting to feel: not replacement, but disorientation.
Karpathy is more productive than ever. He built an autonomous research system that ran roughly a hundred experiments overnight and surfaced tunings he had overlooked despite twenty years of experience. He created a home automation agent he calls “Dobby the elf claw” that let him stop using “like six apps” because it controlled them all in natural language. He is shipping faster, building more, and reaching further than at any point in his career.
And he describes the experience as “psychosis.”
The Feeling Nobody Talks About
There is a conversation happening quietly in many engineering teams and data organizations. It is not the conversation about productivity gains or cost savings or headcount. It is the other conversation. The one about what it feels like when the skill you spent your career building becomes something an agent does in seconds.
I felt it myself this year. I was actively learning front-end development: React, Next.js, the whole ecosystem. I had set aside time each week to work through tutorials and build small projects. It was a deliberate investment in a skill I did not have.
Then I started using Claude Code. Within days, I realized Claude Code could write high-quality front-end code faster and more idiomatically than I would manage after months of learning. The skill I was mid-way through acquiring had become free while I was still paying for it.
That was a stranger feeling than having an existing skill automated. I was not mourning something I had built over years. I was watching a skill depreciate in real time, during the exact weeks I was investing in it. The rational response was clear: stop learning to write front-end code and start learning to ship products with it. Focus on the product judgment, the design decisions, the user experience choices that Claude Code cannot make for me. Let the agent handle the implementation.
I made that shift. It was the right call. But the feeling of abandoning a skill you are actively building is different from the feeling of watching an old skill get automated. It is more disorienting because you cannot even tell yourself “I had a good run.” You barely started.
The Identity Trap
The disorientation has a structure. It follows a pattern I have seen in my own career and in the careers of people I mentor.
Stage 1: You define yourself by what you can do. “I am a data architect.” “I write Python.” “I design pipelines.” The skill is the identity. The years of practice are the credential.
Stage 2: The skill becomes automated. Not fully, not perfectly, but enough. An agent writes the pipeline. Another agent designs the schema. A third one generates the SQL. The output is 80% correct, and correcting the remaining 20% takes a fraction of the time that building from scratch used to take.
Stage 3: The identity crisis. If the agent writes the code, what does the coder do? If the agent designs the pipeline, what does the architect do? The skill that defined you is no longer scarce. And scarcity was a significant part of its value.
Stage 4: The reframe. This is where Karpathy is right now, and where many senior practitioners will eventually arrive. The center of gravity is shifting from manual execution toward judgment, system design, and verification.
What Karpathy Actually Does Now
Listen carefully to what Karpathy describes as his current workflow. He is not idle. He is:
- Defining research objectives in a `program.md` file
- Setting constraints and evaluation metrics
- Running parallel agents across multiple workstreams
- Reviewing results and deciding which optimizations to keep
- Catching errors the agents cannot see (architecture tweaks that improve a metric but degrade something unmeasured)
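To make the first item concrete: a research-definition file of this kind might look something like the sketch below. This is purely hypothetical — the actual contents of Karpathy's `program.md` are not public, and every objective, constraint, and metric here is invented for illustration.

```markdown
# program.md — research program definition (hypothetical sketch)

## Objective
Reduce validation loss on the ablation sweep without increasing
training wall-clock time by more than 10%.

## Constraints
- Budget: ~100 overnight runs
- Do not modify the data loader or the evaluation harness
- Keep parameter count within ±5% of the baseline

## Evaluation metrics
- Primary: validation loss at the final checkpoint
- Guardrails: throughput (tokens/sec), peak memory

## Reporting
Summarize every run in a results table. Flag any run where a
guardrail regressed, even if the primary metric improved.
```

The guardrail line is the human judgment encoded as text: it is exactly the failure mode Karpathy describes, an architecture tweak that improves the measured metric while silently degrading something the agent was never asked to watch.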
He is not coding. He is exercising judgment. And that judgment, built on twenty years of knowing what good code looks like, what good research looks like, and what good engineering looks like, is the thing the agent cannot replicate.
“Everything feels like a skill issue. You just haven’t found a way to string it together.”
When Karpathy says agent failures are “skill issues,” he is saying that the human’s ability to direct the agent is the bottleneck. Not the model. Not the compute. The human’s knowledge of the domain, their sense of what “correct” looks like, their judgment about which trade-offs are acceptable.
The skill did not become worthless. It changed form. It used to express itself through syntax. Now it expresses itself through judgment.
The Parallel in Data Work
I am seeing this pattern in conversations with data professionals across the industry. These are illustrative composites, but the shape repeats: a senior data engineer who spent years mastering Spark now spends most of her time reviewing agent-generated pipeline code rather than writing it. A principal architect who used to design schemas by hand now evaluates whether agent-proposed schemas match the actual query patterns. A governance lead who used to write policy documents now evaluates whether AI-generated policies capture the nuance of their organization’s risk appetite.
In each case, the mechanical skill (writing Spark, designing schemas, drafting policies) has been partially automated. In each case, the human’s value has shifted to the same place: knowing whether the output is right.
The judgment-in-the-loop framework I wrote about earlier this week captures this precisely. The industry already had “human-in-the-loop.” But having a human in the loop is not enough. A junior analyst clicking “Approve” on an agent-generated report is human-in-the-loop. A senior practitioner who catches that the report uses last quarter’s pricing model is judgment-in-the-loop. The distinction is the twenty years of experience that Karpathy carries into every agent interaction.
The Grief Is Real. The Leverage Is Higher.
I do not want to dismiss the emotional dimension. Karpathy calls it psychosis. Others might call it grief. When a skill you spent decades building becomes “free and instant,” there is a legitimate loss. The craft had meaning beyond its output. The struggle to get something working, the satisfaction of elegant code, the pride in a system that runs without you: these were sources of identity, not just productivity.
That grief deserves acknowledgment, not a pivot to “but look how much more productive you are now.” Productivity is not the point. Identity is.
But identity can be rebuilt. And the new identity is more leveraged than the old one in many contexts. The person who can direct an agent effectively is rarer and more impactful than the person who can write the code the agent writes. Not because coding was easy, but because judgment scales in ways that execution does not.
Karpathy’s own trajectory proves this. He is not less valuable because he stopped typing code. He is building things that were impossible six months ago. The scope of what one person can accomplish has expanded dramatically. But only for people who bring judgment to the expanded scope.
The Question Worth Sitting With
“I want to be at the forefront of it, and I’m very antsy that I’m not at the forefront of it.”
That anxiety is the signal. It means you care enough about your craft to feel the shift.
The ones feeling the disorientation, the ego bruise, the quiet grief for a skill that used to be hard and is now free: those are the ones building the next version of their value. Because they understand, viscerally, what the agents are replacing and what they are not.
For the technical side of this shift, how agentic engineering changes data and AI work specifically, I wrote a companion piece: From Vibe Coding to Agentic Engineering.
The agents are replacing execution. They are not replacing the knowledge of what to execute, the judgment of whether it was done correctly, or the wisdom to know when “correct” is not the same as “right.”
That distinction is not a consolation prize. It is the whole game.