I listened to that podcast a few years ago, before the ChatGPT launch.

I really encourage you to listen to it.

Two quotes from it:

  1. "Most real systems can't be specified. If you could specify them then they are dead."
  2. "[about Lisp] You stop thinking about the language because the language is simple. What you are thinking is about the problem."

Questions tied to the quotes:

  1. Does this mean that building software by prompting can only take you so far?
  2. That was in 2021. Many would agree the same is now happening with English, not just Lisp, thanks to AI coding agents. Do you agree?
50 sats \ 3 replies \ @k00b 14 Jan
  1. I don't know what you mean by this. I sense he means that complexity is inherent in interesting systems.
  2. Yes, improvements to the ease of programming orient one more toward the problem. Lisp is very low on syntax, so the focus is on the semantics of a solution to a problem (see the sketch after this list). Likewise, LLMs do something similar, but go a step further and handle the semantics of the solution pretty well, so the focus shifts to specifying the problem well.
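To make the "low on syntax" point concrete, here is a minimal Clojure sketch; it is my illustration, not an example from the podcast, and `total-due`, the item maps, and the `discount` parameter are all invented:

```clojure
;; Illustrative only: the syntax is just nested forms, so reading the
;; code is mostly reading the problem (prices, a discount rate), not
;; the language.
(defn total-due
  "Sum the line-item prices, then apply a discount rate."
  [items discount]
  (* (reduce + (map :price items))
     (- 1 discount)))

(total-due [{:price 10} {:price 5}] 0.2)
;; => 12.0
```

Almost everything on the screen names a piece of the problem; the language itself contributes little more than parentheses.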

If interesting software can't be fully specified, and LLMs/AI coding agents are spec-to-implementation tools, then prompting won't be enough to build that kind of software.

50 sats \ 1 reply \ @k00b 15 Jan

Oh I see. Specs may not be enough to describe an interesting system fully, but humans close the gap by “decompressing” a spec into an interesting system. LLMs appear to be capable of decompressing prompts in some way too.

I think what this means for LLMs is that any prompt, like a spec given to a human, is not enough to determine or predict the output if the output is complex enough (see the toy sketch below).
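As a toy illustration of that underdetermination (mine, not from the thread): even a one-line spec like "sort users by name" admits multiple faithful implementations that disagree on details the spec never mentions. The spec, the data, and both readings below are invented:

```clojure
(require '[clojure.string :as str])

;; Hypothetical one-line spec: "sort users by name".
(def users [{:name "alice"} {:name "Bob"}])

;; Reading 1: case-sensitive sort; ASCII ordering puts "Bob" first.
(sort-by :name users)
;; => ({:name "Bob"} {:name "alice"})

;; Reading 2: case-insensitive sort.
(sort-by (comp str/lower-case :name) users)
;; => ({:name "alice"} {:name "Bob"})
```

Both readings satisfy the words of the spec; the "decompression" step is where a human, or an LLM, picks one.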

Then again, the Clojure guy is talking about humans writing specs. Maybe if you prompt LLMs to write specs, they can create complete specs of interesting systems.

> Maybe if you prompt LLMs to write specs, they can create complete specs of interesting systems.

I can't wait to live in the future and see how this turns out.
