The Week AI Broke My Confidence But Made Me a Better Engineer

The week of February 9th, 2026 was when I fell into a quiet existential crisis.

I feared for my short career in tech. I had witnessed firsthand how much better AI models had become at coding, and I was spending more time prompting than writing code manually. Models wrote code faster than I could and navigated large codebases with an ease that would take me hours to match. Increasingly, my job was shifting from writing code to prompting systems that wrote it for me.

I remember the whole team, including my manager, being thrown into a frantic mode; nobody knew how to react to the lightning-fast pace at which our careers were changing. Managers started coding with Claude, and engineers flooded our internal channels with posts about their awesome AI workflows. Everyone was trying to figure out the same thing: what does it mean to be an engineer when machines can code better than you?

Ironically, I had always considered myself an early adopter of AI. At one point I was subscribed to three different model providers and experimenting with every new tool that appeared. I genuinely wanted to understand how AI could improve my work.

I have always been confident in my abilities. There is a cockiness in me convinced that I would always be able to adapt to changes in tech; adaptation has been a requirement of the job. Even the doomiest headlines didn't faze me. With the arrival of Opus 4.5, everything changed. AI could code better than me, find code pointers faster, and understand an entire codebase more quickly than I ever could. I started using nvim and tmux more and opened VS Code less often. At the same time, my productivity skyrocketed, and I was shipping more code than ever.

A Mirror to My Weaknesses

This brings me to another revelation: I am a builder, but I have never been a good executor. I came up with good ideas, but more often than not I abandoned them midway when a technical challenge came up and my energy was consumed by a full-time job. My projects only got as far as a half-baked backend and a clunky UI. They usually came to a halt right after I purchased a domain name.

With Opus, I could focus on strategy and intent while letting agents handle much of the tedious implementation. Instead of being blocked by unfamiliar tools, I could describe what I wanted and iterate quickly. Its strengths complement my weaknesses. I am no longer constrained by my own knowledge and can move faster and range wider than ever. I shipped more code in the first two months of 2026 than I did in all of 2025.

My GitHub commit tally for the first three months of 2026
My GitHub commit tally for 2025

For the first time, many of my projects actually made it past launch day.

The Power of Faster Failure

I started asking myself:

“Would I rather spend one day building a prototype and discover the idea doesn't work — or spend three months building it and learn the same thing?”

I know my answer. With AI, my ideas can expand into territories I never explored before, simply because I didn't want to spend time learning a new language or framework. For instance, building an iOS app has always been something I wanted to do, but building a Swift app isn't really relevant to my backend and ML background.

"Everyone Can Build Software Now"

A common claim on X these days is that everyone can build software now. That's true, but I don't believe everyone wants to make their own software. Though the entry barrier has been reduced significantly, building still requires some basic engineering knowledge. An analogy I often think about is writing: everyone knows how to write, but how many people want to write a book?

AI widens the gap between the competent and the incompetent rather than narrowing it. If you have always been a builder, you can build and iterate faster than ever. Most people, even engineers at FAANG, don't want to become 10x engineers; they just want to do their job and go offline. Not everyone will become an AI-native engineer.

If you are a doctor, or in most other professions, what you learn in school stays relevant long after you graduate; you seldom have to update your knowledge base. Adaptation has always been the unique trait of being an engineer. That's why we are rewarded with the generous compensation packages other industries can only envy. The nature of our career is the same as ever. New tools have come out every couple of years: jQuery gave way to React and Next.js, and Java was overtaken by JavaScript and Python in popularity and practicality. Learning new tools is a requirement of the job.

AI has essentially solved coding, per Boris Cherny, but coding isn't engineering. Engineers are paid for productivity output, not for our magic in manipulating 0s and 1s. AI models have made our jobs easier, so we can now focus on the important part: deciding what to build. Until the day AI can solve decision making and perfectly build precisely what we are thinking, I remain an AI maximalist.

A Strange Conclusion

Ironically, the technology that initially made me fear for my career has made me a far better engineer.

My output has increased at least two to three times. I experiment more. I ship more.

And most importantly, I spend more time thinking about the decisions behind the code, not just the code itself.