I'm reading it somewhat like this...
- Millions of people need to write a function that can accomplish task X.
- The cognitive burden of doing that used to be heavy, even with modern programming languages.
- LLMs have reduced that cognitive burden, but they do so by acting as a middleman that transforms our natural language into a modern programming language.
- OP is potentially asking a few things:
- why can't the programming language itself be more adapted to natural language expressions? Why use a middleman that uses expensive energy and machinery to translate our natural language into a formal programming language?
- for the most common tasks, why can't a natural-language-to-machine-code transformation be built into the system, rather than requiring expensive, external LLM calls for millions of people every time?
reply
I haven't read OP but I'd guess the answer is:
- LLMs are nondeterministic (although it might be possible to make them deterministic)
- Code is already a human language (detailed) spec
- Inference on frontier LLMs, the ones large enough to translate human language into programs as reliably as code does, requires above-average hardware
- LLMs aren't as hands-free as people make them out to be
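The nondeterminism point in the first bullet can be made concrete with a toy sketch (this is not any real LLM API, just a stand-in distribution): greedy, temperature-0 decoding always picks the argmax token and is therefore deterministic for a fixed model, while sampling varies run to run.

```python
import random

def next_token_probs(context):
    # Stand-in for a model step: a probability distribution
    # over candidate next tokens (hypothetical values).
    return {"cat": 0.5, "dog": 0.3, "fish": 0.2}

def greedy_pick(probs):
    # Greedy / temperature-0 decoding: always take the argmax,
    # so repeated calls give the same output.
    return max(probs, key=probs.get)

def sampled_pick(probs, rng):
    # Temperature-1 sampling: the output depends on the RNG state,
    # so repeated runs can differ.
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

probs = next_token_probs("the quick")
assert greedy_pick(probs) == greedy_pick(probs)  # deterministic
print(sampled_pick(probs, random.Random()))      # varies across runs
```

In practice "making them deterministic" is roughly this plus pinning the model weights and hardware-level numerics, which is why it's possible in principle but rarely the default.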
reply
the OP itself is worth a read, it's only like two paragraphs, not even a full blog post
reply
Following my own train of thought, this line of inquiry could explain why I like Python so much. It provides native support for higher-level data structures like dicts in a way that feels natural. In C++, for example, you'd reach for std::unordered_map, which works but takes noticeably more ceremony than Python's literal syntax.
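A small sketch of what "feels natural" means here: Python's dict is a first-class literal with no declarations or includes (the fruit names below are just made-up example data).

```python
# A dict exists the moment you write it: no type declarations,
# no #include, no template parameters.
counts = {"apple": 3, "banana": 1}

# Common operations read like plain English.
counts["cherry"] = counts.get("cherry", 0) + 1
total = sum(counts.values())

print(counts, total)
```

The C++ standard library does give you a hash map out of the box (std::unordered_map), so it isn't hand-rolled, but the equivalent code still needs an include, explicit key/value types, and more verbose iteration.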
reply
It feels like OP is hinting at something important, but after reading the short blog post, I'm not sure what exactly. How did you interpret it?