
I've been seeing a number of these articles lately, bemoaning how awful and incorrect LLMs are. It's unclear to me whether LLMs are just exposing how much we rely on each other for information, or whether we're for some reason especially inclined to trust their outputs (despite frequently hearing how unreliable they are).
We've already pushed the "my llm told me it was okay to eat the mushroom and I nearly died" story into urban legend.
Mechanically speaking, LLMs are literally just auto-complete. They cannot reason, they cannot create new information, they can't even verify information; all they do is approximate the next characters ("output tokens") based on the tokens they've already read.
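To make the "just auto-complete" point concrete, here's a toy sketch of the same loop: predict the next token from the tokens seen so far, append it, repeat. The bigram table is made up for illustration (real LLMs use learned neural networks over subwords, not a lookup table), but note that nothing in the loop checks truth; it only continues a pattern.

```python
# Hypothetical toy "corpus statistics" (an assumption for illustration,
# not a real model): each word maps to next-word probabilities.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def complete(prompt_tokens, max_new=4):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        dist = bigram.get(tokens[-1])
        if dist is None:  # no continuation known: stop
            break
        # Greedy decoding: pick the most probable next token.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(complete(["the"]))  # ['the', 'cat', 'sat', 'down']
```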
To the extent they're "getting worse", it's because of recursion used to emulate thinking and reasoning, which stretches the approximations even further (increasing entropy).
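One back-of-envelope way to see that "stretching" point: if each generated step is acceptable with probability p, and you (over)simplify by treating steps as independent, a chain of n dependent steps compounds to roughly p**n, so longer "reasoning" chains amplify whatever error rate the model already has.

```python
# Back-of-envelope sketch (big simplifying assumption: independent
# per-step errors). Longer chains compound the per-step error rate.
def chain_reliability(p, n):
    return p ** n

print(round(chain_reliability(0.99, 10), 3))   # short chain: ~0.904
print(round(chain_reliability(0.99, 500), 4))  # long chain: ~0.0066
```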
This is fact not opinion, there's really nothing more to discuss about their abilities. Therefore, all the articles in the world about the limitations/possibilities/future of AI that are not rooted in understanding of this fact are noise.
So why do most people treat them as magic? What are the grounds for the hype? And why so many indignant articles about how dumb they are?
The author seems to understand this, so who is he preaching to?
I think the beef isn't with LLMs, but with the expectations set by people in the industry as they push products in search of use-cases, trying to recover the fire hose of money thrown at them.
It's another technology hype-bust cycle, one disillusioned author at a time, lamenting and coping that it can't do what they believed it could.
reply
imo, we've already crossed the peak of inflated expectations but haven't yet crossed the trough of disillusionment.
LLMs are amazing for certain tasks, and they're gonna be world-changing, but they're not some existential threat, nor will they solve all the world's problems, as some would seem to believe
reply
I agree with your assessment, nice chart.
I do think there are some people starting to share my epiphany that AI is the new UI. The shift in investment from foundational models to wrappers is indicative of this, plus all the helpers being baked into chat bots and things like MCP... LLMs are just a new way of calling real APIs without a front-end designer to mediate that with the user. That's the enlightenment imo.
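A minimal sketch of that "AI is the new UI" idea: the model emits a structured request, and a thin shim maps it onto a real function, with no front-end in between. The JSON shape, the `get_weather` tool, and the dispatch table here are all made up for illustration; they aren't any specific MCP or vendor schema.

```python
import json

# Hypothetical structured output from a model (assumption: the model was
# prompted to answer with a JSON "tool call" instead of free text).
model_output = '{"tool": "get_weather", "args": {"city": "Berlin"}}'

# The "real API" behind the chat interface (stubbed for illustration).
def get_weather(city):
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(raw):
    # Parse the model's structured request and call the real function.
    call = json.loads(raw)
    return TOOLS[call["tool"]](**call["args"])

print(dispatch(model_output))  # Sunny in Berlin
```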
reply
I've said this before, even to some stubborn friends: AI is just a 'simple' LLM. The intelligence part still has to be proven.
reply