Millions of people need to write a function that accomplishes some task X.
The cognitive burden of doing that used to be heavy, even with modern programming languages.
LLMs have reduced that cognitive burden, but they do so by acting like a middleman that translates our natural language into a modern programming language.
OP is potentially asking a few things:
why can't the programming language itself be more adapted to natural language expressions? Why use a middleman that uses expensive energy and machinery to translate our natural language into a formal programming language?
for the most common tasks, why can't a natural language transformation to machine code be built-in to the system, rather than requiring expensive, external LLM calls for millions of people every time?
@k00b and @optimism might be interested in this discussion.
Inference with frontier LLMs, the ones large enough to translate human language into programs reliably, requires above-average hardware.
LLMs aren't as hands-free as people make them out to be.
To follow my own thoughts along, this line of inquiry could explain why I like python so much. It provides native support for higher level data structures like dicts in a way that feels natural. In C++, for example, I think you'd reach for something like std::unordered_map, which works but feels a lot heavier than a dict literal (iirc)
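For what it's worth, the contrast can be sketched in a few lines (the variable names are just illustrative):

```python
# Python's dict is a first-class literal, so expressing a mapping
# reads almost like natural language.
ages = {"alice": 31, "bob": 27}
ages["carol"] = 44          # insert
print(ages.get("dave", 0))  # lookup with a default -> prints 0

# The rough C++ equivalent needs an include and a template type
# (std::unordered_map<std::string, int>), which is part of why the
# same idea feels less natural there, even though it's built in.
```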
The higher the level of abstraction/language you use, the more vague it becomes, unless it has a carefully detailed specification for what it actually does at a low level.
Spoken language is the highest level we have, and in order to use that precisely to code, you'd need to specify precisely what each word means in terms of lower level code (e.g. python). And the meaning of each word changes based on the other words used with it.
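As a toy illustration of that last point (my example, not OP's), even a simple instruction like "sort the names" hides choices that a precise lower-level specification would have to pin down:

```python
names = ["Bob", "alice", "Carol"]

# "sort the names" -- but which meaning of "sort"?
print(sorted(names))                 # case-sensitive: uppercase letters order first
print(sorted(names, key=str.lower))  # case-insensitive, what most people probably mean
print(sorted(names, reverse=True))   # or did you want them descending?
```

The same spoken word maps to several different programs, and context (or "best practice") has to supply the missing details.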
Importantly meaning can evolve too.
Often you want to specify something without caring about the precise meaning - as long as the gist has been understood. You're happy for the details you haven't mentioned to just be "best practice".
LLMs solve this problem by cleverly tracking the currently recognised meaning of spoken language in terms of low-level code. They very quickly figure out what you most likely mean when you specify something in spoken language, and fill in the unstated details with sensible defaults, sometimes :-)
Converting spoken requirements into code is a problem the field of software has been wrestling with for decades, and LLMs are the latest solution.
It feels like OP is hinting at something important, but after reading the short blog post, I'm not sure what exactly. How did you interpret it?
I'm reading it somewhat like this...
I haven't read OP but I'd guess the answer is:
the OP itself is worth a read, it's only like two paragraphs, not even a full blog post