pull down to refresh

It feels like OP is hinting at something important, but after reading the short blogpost, I'm not sure what exactly. How did you interpret it?

reply

I'm reading it somewhat like this...

  1. Millions of people need to write a function that can accomplish task X.
  2. The cognitive burden of doing that used to be heavy, even with modern programming languages.
  3. LLMs have reduced that cognitive burden, but they do so by acting like a middleman that transforms our natural language into a modern programming language.
  4. OP is potentially asking a few things:
    • why can't the programming language itself be more adapted to natural language expressions? Why use a middleman that uses expensive energy and machinery to translate our natural language into a formal programming language?
    • for the most common tasks, why can't a natural language transformation to machine code be built-in to the system, rather than requiring expensive, external LLM calls for millions of people every time?

@k00b and @optimism might be interested in this discussion.

reply
32 sats \ 1 reply \ @k00b 4 Apr

I haven't read OP but I'd guess the answer is:

  1. LLMs are nondeterministic (although it might be possible to make them deterministic)
  2. Code is already a human language (detailed) spec
  3. Inference for frontier LLMs, the ones large enough to translate human language into programs as reliably as code does, requires above-average hardware
  4. LLMs aren't as hands-free as people talk about them being
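On point 1, here's a toy sketch (names and numbers made up) of why sampling makes LLM output nondeterministic, and why it's at least partly fixable: pinning temperature to 0 collapses sampling to argmax, and pinning the RNG seed makes even nonzero temperatures repeatable.

```python
import numpy as np

def sample(logits, temperature, seed=0):
    """Pick a token index from raw logits at the given temperature."""
    if temperature == 0:
        # Temperature 0 collapses sampling to argmax: fully deterministic.
        return int(np.argmax(logits))
    rng = np.random.default_rng(seed)
    # Softmax with temperature: higher temperature flattens the
    # distribution, making less likely tokens more probable.
    p = np.exp(np.array(logits) / temperature)
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

print(sample([1.0, 3.0, 2.0], 0))  # always 1, the argmax index
```

In practice there are more wrinkles (floating-point nondeterminism across hardware, batching effects), so "possible to make deterministic" is doing some work there.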

reply

the OP itself is worth a read; it's only like two paragraphs, not even a full blog post

reply

To follow my own line of thought, this could explain why I like Python so much. It provides native support for higher-level data structures like dicts in a way that feels natural. In C++, for example, the closest equivalent is std::unordered_map, which takes a lot more ceremony to use (iirc)
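For what it's worth, a tiny illustration of that "feels natural" point (the names are made up):

```python
# Python dicts are built in, with literal syntax and no setup required.
scores = {"alice": 3, "bob": 7}
scores["carol"] = 5
top = max(scores, key=scores.get)  # key with the highest score
print(top)  # "bob"

# A rough C++ equivalent would be std::unordered_map<std::string, int>:
# an #include, template parameters, and explicit types just to express
# the same idea.
```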

reply
10 sats \ 0 replies \ @anon 4 Apr

The higher the level of abstraction/language you use, the more vague it becomes, unless it has a carefully detailed specification for what it actually does at a low level.

Spoken language is the highest level we have, and in order to use that precisely to code, you'd need to specify precisely what each word means in terms of lower level code (e.g. python). And the meaning of each word changes based on the other words used with it.

Importantly, meaning can evolve too.

Often you want to specify something without caring about the precise meaning, as long as the gist has been understood. You're happy for the details you haven't mentioned to just be "best practice".

LLMs solve this problem by cleverly following the currently recognised meaning of spoken language in terms of lower-level language. They quickly figure out what you most likely mean when you specify something in spoken language, and fill in the details with sensible defaults, sometimes :-)

Converting spoken requirements into code is a problem the field of software has been wrestling with for decades, and LLMs are the latest solution.

reply