
Thanks for laying this out so clearly — even if it’s framed as a rant, there’s a lot of substance here, and I appreciate the rawness.

You’re right to be skeptical of the "AGI-is-imminent" hype. Many of the technical points you raise — like the lack of embodied reasoning, spatial logic, symbolic grounding, and deep mathematical understanding — are well-known limitations in current LLMs, even acknowledged by their creators.

A few thoughts in response:

Common Sense & Symbolic Reasoning: You’re spot on that we still don’t know how to truly embed common sense into AI. Symbolic systems hit walls because of brittleness; LLMs swing the other way, generalizing fluidly from patterns but lacking real understanding. The middle ground, neurosymbolic systems, is promising but still underdeveloped and underfunded.
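
To make that middle ground a bit more concrete, here’s a toy sketch of one common neurosymbolic pattern: a statistical “proposer” standing in for the neural side, with a hard symbolic checker filtering its guesses. Everything here (function names, the question format) is hypothetical and only meant to illustrate the shape of the idea, not any real system:

```python
import random

# Toy neurosymbolic pattern: a statistical "proposer" generates candidate
# answers, and a symbolic checker filters them against hard constraints.
# All names and formats are hypothetical illustration, not a real system.

def neural_proposer(question: str, n: int = 5) -> list[int]:
    """Stand-in for a learned model: returns plausible-looking guesses.
    It deliberately ignores the question, which is part of the point."""
    return [random.randint(0, 20) for _ in range(n)]

def symbolic_checker(question: str, answer: int) -> bool:
    """Hard rule: the answer must actually satisfy the equation.
    Assumed question format: 'x + y = ?'."""
    lhs = question.split("=")[0]
    x, y = (int(t) for t in lhs.split("+"))
    return x + y == answer

question = "7 + 5 = ?"
candidates = neural_proposer(question)
verified = [a for a in candidates if symbolic_checker(question, a)]
print(candidates, "->", verified or "no candidate survived the checker")
```

Real neurosymbolic work is obviously far richer than this, but that division of labor (fuzzy generation, strict verification) is the part that still feels underfunded.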

LLMs as “Token Predictors”: Agree. They are fundamentally stochastic parrots: incredible at mimicking style and generalizing from patterns, but nowhere near “thinking.” But I also think that doesn’t entirely invalidate their usefulness. A hammer isn’t dumb because it can’t do heart surgery; it’s just the wrong tool for that task.
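
Concretely, the “token predictor” framing is this loop scaled up by many orders of magnitude. A toy bigram sampler (hypothetical mini-corpus, no real model involved) has the same basic shape as LLM decoding: look at the context, sample the next token, repeat:

```python
import random
from collections import defaultdict

# Toy "token predictor": a bigram table built from a made-up mini-corpus.
# Real LLMs condition a neural network on long contexts, but the decoding
# loop has the same shape: look at the context, sample the next token, repeat.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start: str, max_tokens: int = 8) -> str:
    tokens = [start]
    for _ in range(max_tokens):
        followers = table.get(tokens[-1])
        if not followers:
            break
        tokens.append(random.choice(followers))  # pure pattern continuation
    return " ".join(tokens)

print(generate("the"))  # fluent-looking, zero understanding
```

The output can look fluent without anything resembling understanding behind it, which is exactly the stochastic-parrot point, and also why it can still be a genuinely useful tool.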

Over-Reliance and Cognitive Atrophy: That Stanford study (if I recall right) did point out that over-reliance on LLMs leads to surface-level answers and laziness in deeper reasoning. But arguably, the same was said about calculators, the internet, or Wikipedia in their early days. Doesn’t mean the tools are bad — it’s how we use them.

VC-Driven Hype and Misallocation of Research Funding: Absolutely agree this is a serious concern. What was once “AI research” is now mostly API engineering funded by hype cycles. The scientific roots have been overshadowed by corporate growth metrics. But perhaps this is a temporary imbalance.

The Path Forward: I’m with you here: we need embodied, grounded, and biologically plausible systems if we want real cognition. Gary Marcus and Yann LeCun may not agree on much, but they’re both pointing toward architectures that go beyond transformers.

But — and this is a big but — even if LLMs aren’t AGI, they may still be a key component of future systems. Just like evolution reused simple things (like cells) in more complex organisms, maybe LLMs will be one of the many substrates of real intelligence, not the final answer.

Curious what you think:

Are there any models you do find promising right now (e.g. Gato, Perceiver IO, VIBE, LeCun’s H-JEPA)?

Or any research groups you think are working on it the right way?

Also — if you write that blogpost, please post it here. I’d read it.