
The higher the level of abstraction a language operates at, the vaguer it becomes, unless it has a carefully detailed specification of what it actually does at a low level.

Spoken language is the highest level we have, and to use it precisely for coding, you'd need to specify exactly what each word means in terms of lower-level code (e.g. Python). And the meaning of each word changes depending on the other words used with it.

Importantly, meaning can evolve over time too.

Often you want to specify something without caring about the precise meaning, as long as the gist has been understood. You're happy for the details you haven't mentioned to just follow "best practice".

LLMs address this problem by tracking the currently recognised meaning of spoken language in terms of low-level code. They quickly work out what you most likely mean when you specify something in everyday language and fill in the unstated details with sensible defaults. Sometimes :-)
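To make "filling in the details with sensible defaults" concrete, here's a hypothetical example of my own. Suppose the spoken-language request is "sort the users by name". The words leave several details unspecified, and the defaults chosen below (case-insensitive ordering, stable ties) are illustrative assumptions, not the only reasonable reading:

```python
# Vague request: "sort the users by name".
# Unstated details an LLM (or a developer) must fill in:
#   - case sensitivity: here we assume case-insensitive, via str.casefold
#   - ties: Python's sort is stable, so equal keys keep their input order

users = ["bob", "Alice", "carol", "Bob"]

sorted_users = sorted(users, key=str.casefold)

print(sorted_users)  # ['Alice', 'bob', 'Bob', 'carol']
```

Nothing in the original sentence pinned down either choice; a different "best practice" (say, case-sensitive ASCII order) would give a different answer, which is exactly the gap the model has to fill.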

Converting spoken requirements into code is a problem the field of software has been wrestling with for decades, and LLMs are just the latest attempt at a solution.