@tonyaldon
If interesting software can't be fully specified, and LLMs/AI coding agents are spec-to-implementation tools, then prompting won't be enough to build that kind of software.
I listened to that podcast a few years ago, before the ChatGPT launch.
I really encourage you to listen to it.
Two quotes from it:
- "Most real systems can't be specified. If you could specify them then they are dead."
- "[about Lisp] You stop thinking about the language because the language is simple. What you are thinking is about the problem."
Questions tied to the quotes:
- Does this mean prompting software can only take you so far?
- That was said in 2021. Many would now argue the same is happening with English, not just Lisp, thanks to AI coding agents. Do you agree?
Since I started using GPT-5.2 straight from the OpenAI API, the responses feel less sycophantic. The experience is better because of that.
For more context, I'm writing some tests right now. Handling all the edge cases requires care, and if I prompt for them, I end up writing almost the whole test anyway, just with less precision. Today, it's faster to write them myself.
I can't wait to see how this turns out in the future.