“The New Code” by Sean Grove at OpenAI – a charitable version of Lovable’s “writing the last piece of software” marketing – argues that programmers will be replaced by, essentially, really thorough PMs who are methodical at prompting:
In traditional programming, source code is sacred. The binary is disposable.
The specification contains the original intent. The code? It’s just a “lossy projection.”
The conclusion: in the “near future”, source code will end up like binaries – it is there, but nobody reads it, and the source of truth is a really good spec expressed as the AI prompt.
Sounds appealing, but three problems:
- Binaries are precise and functionally deterministic – they are not a “lossy projection” at all. Run even a complicated piece of code through the compiler and you will get the same binary every time. With AI, you will get a different hallucination each time, depending on the day of the week and the version of your model. So if you managed to AI-generate source code that works and then discard it instead of treating it as the source of truth, good luck generating the same thing next time! Pre-AI binaries and AI-only source code simply do not have the same properties (see the sketch after this list).
- Nobody debugs binaries anymore, aside from a very small group of performance and security specialists. In the AI-only approach, however, when something breaks, someone will still need to roll up their sleeves, scrutinize the generated source, and understand how it works and how to fix it. Assuming you can just throw more LLMs at every weird production fix is currently a huge stretch – they are just not that smart if your code base is anything bigger than one service or one full-stack CRUD app.
- Also, in my day-to-day, GPT-5 and Sonnet still sometimes produce garbage that is hard to use efficiently – more time is spent tinkering with agent settings and catching LLM lies than actually moving the product forward. Yes, other people report differently. There are real – not the AI-salesbro type – people out there who rely on a lot of generated code in production applications, especially for UI work. I think it’s often greenfield stuff generated by someone who is really, really good at code review.
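
To make the determinism point concrete, here is a minimal sketch of the contrast, assuming gcc, a local main.c, and the openai Python SDK (v1) with OPENAI_API_KEY set; the prompt and model name are placeholders, not a recommendation:

```python
import hashlib
import subprocess


def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Compile the same source twice: same input, same binary
# (assuming no __DATE__-style macros; reproducible-build flags
# make this guarantee bit-exact even across machines).
for out in ("a.bin", "b.bin"):
    subprocess.run(["gcc", "-O2", "main.c", "-o", out], check=True)
print(sha256("a.bin") == sha256("b.bin"))  # True

# Prompt the same model twice: no such guarantee, even at
# temperature=0, and certainly not across model versions.
from openai import OpenAI

client = OpenAI()
prompt = "Write a C function that parses an ISO 8583 message header."
outputs = [
    client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    for _ in range(2)
]
print(outputs[0] == outputs[1])  # frequently False
```

If the generated source is the only artifact that ever worked, discarding it means betting the farm on that second comparison coming up True.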
In my day-to-day (mostly backend fintech work), most time is spent not coding, but understanding arcane bank formats and protocols from the ’60s (not about to change, banks), getting on the phone (sometimes literally on the phone) with them to understand what broke, deciphering crazy card-swipe patterns from merchants where multiple parties and middlemen of varying trustworthiness are involved, and, when it comes to development, writing code that spans multiple weird, scantily documented services, each with its own bureaucratic access controls.
If you program in one stack, in one brand-new service, things like the Laravel Boost MCP connector (to borrow an example from another world) can potentially take a ton of legwork out, but if you are building something that goes even a bit beyond that, AI stuff seems to just fall short as of summer 2025, and not for lack of trying.
In summary, the path from today to a situation where source code is rarely touched by a human is not yet obvious.

