
"GPT-5 is a step forward but nowhere near the A.I. revolution many had expected. That is bad news for the companies and investors who placed substantial bets on the technology."

This seems like a bit of an -- okay, a huge -- overstatement. I certainly am not as familiar with the AI landscape as Mr Marcus, but I highly doubt that such a dour verdict is warranted.
"Large language models, which power systems like GPT-5, are nothing more than souped-up statistical regurgitation machines..."
Even if this is the case -- and lots of people seem to think it is -- having a "souped-up statistical regurgitation machine" that can offer pretty good interaction with a massive portion of human knowledge seems like a very useful thing.
Google is a useful tool. I can't really imagine a life where I had to use crappy search to find things on the internet. Google search is something like a statistical machine. Prior to AI, it didn't "regurgitate," but it certainly helped you get to the information you wanted. Even as nothing more than statistical machines, LLMs are already a big improvement on Google search.
"...so they will continue to stumble into problems around truth, hallucinations and reasoning."
This part may be the most accurate thing Marcus says. I suspect hallucinations will vanish the way extra fingers vanished from AI-generated images, but getting models to understand truth may be a bit more difficult.
"Scaling would not bring us to the holy grail of A.G.I."
I'm sure there are some people who say such a thing of scaling, but I'd imagine anyone actually working in the field could bring a fair bit more nuance to the matter. This is one of the main problems I see with this opinion piece: it's all sledgehammer statements in a field that requires tiny little chisels. To be fair to Marcus, the AI boosters pretty much all use sledgehammers, too.

"intelligence is about more than mere statistical mimicry"

I wonder about this one. How intelligent would a human be without mimicry? You hear stories of feral children, lost in the woods and raised by wolves or baboons, and in most cases such children never learn language after they are found and are considered cognitively impaired.
I don't doubt humans have more going on than mimicry, but mimicry is hugely important to turning us into something resembling human intelligence.

It seems likely that AI minds will never end up being the same as human minds. That doesn't mean AI minds are a failure or not useful. Nor does it necessarily mean they are doomed to simple regurgitation.

LLMs are doing something different from what humans do when we talk about thinking. This doesn't seem to me to be a very big problem. Humans love to personify things. We do it all the time and to almost anything. It might help with learning about AI if we backed off on the personification for a while.
"To build A.I. that we can genuinely trust and to have a shot at A.G.I., we must move on from the trappings of scaling. We need new ideas."
Trust is an interesting point here. We want trust in AI so that we can have it operate autonomously, I think. I'm sure Marcus has no trouble trusting the mere mechanical machinery of his brakes to stop his car. Why should it be more difficult to trust an AI agent -- especially if one believes it is mere statistical regurgitation?
It's hard to trust because we don't fully understand its behavior. My own litmus test for consciousness or intelligence certainly includes "not being able to fully understand its behavior" as a crucial point. Isn't there a weird paradox here? If we can fully understand a thing, it probably isn't intelligent. But if we can't fully understand it, we might not be able to know that it's not tricking us with advanced mimicry...