I have this view that there is "physics of programming". The physics of programming is NOT the algorithms you use, or the syntax, or the runtime complexity or memory footprint of the code. These are just performance metrics or limits of the particular language you are using.
Rather, the physics of programming is the mental model you use to describe the code. The physics is often defined by the variable names you choose, organization and standardization of files or class names, the data structures you use, the comments you write, or even a paragraph or two in the README.md that explains something at a high-level.
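As a rough illustration of this idea, here is a hypothetical sketch of the same computation written with two different "physics" — all names and the compound-interest example are invented for illustration, not taken from any real codebase:

```python
# Bad physics: the logic is correct, but the reader must
# reverse-engineer the mental model from single-letter names.
def f(d, r, n):
    return d * (1 + r) ** n

# Good physics: the names carry the story the code is telling.
def compound_balance(principal, annual_rate, years):
    """Balance after compounding `principal` at `annual_rate` for `years`."""
    return principal * (1 + annual_rate) ** years

# Both functions compute the same thing; only the mental model differs.
assert f(100, 0.05, 2) == compound_balance(100, 0.05, 2)
```

The runtime behavior is identical; what differs is the story a new reader can tell themselves after one glance.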
The "physics of programming" is what often determines the long-term success of the code.
Can someone new approach the code and read it like a poem or a timely newsletter? Where every line makes you want to keep reading? Can a human tell themselves a story about what's actually happening in the silicon? Is that story clear enough that they can tell another programmer (or AI model) that same story from memory?
As requirements change, as new tech emerges, code with good physics will attract smart programmers to work on it and keep it relevant.
Sometimes this approachability is a bad thing: a mental model that is too simple invites people to complicate it in ways that make it worse. It's also why we sometimes encounter that one piece of spaghetti code nobody ever touches, yet it somehow works flawlessly for years.
Good physics comes from understanding how people think: how we reason about abstract ideas, the words we use to refer to them, and the limits of what a human can hold in their mind at one time. When we read code with bad physics, we call it spaghetti, or unmaintainable, or smelly.
AI does not have limits like ours. Its context window is huge, its memory is vast. The stories it can hallucinate are endless. But it is still trained on data that was produced with human limits in mind.
Whether AI models write code with good physics or not is likely dependent on the prompt and context you give it. The AI is trained on lots of code with both good and bad physics. The AI knows the syntax, it knows the quirks of the language you're using, it knows how to write performant code, but the physics is not well-defined unless you give it a "mental model" to work with.
If AI is increasingly writing code, and humans less so, I would imagine the physics of programming could shift to a point where spaghetti is everywhere, but nobody cares because the AI can reason about it just fine. Kind of like how (almost) nobody reads or writes assembly or bytecode by hand anymore.