Surely we can admit it displays multiple measures of intelligence and reasoning. I like the example Ilya Sutskever discusses with Jensen Huang: if you train an LLM on text, then feed it a mystery novel and ask at the end "who is the murderer?", and it gets the answer right, is that not demonstrating that the model is not merely guessing the next word but is also developing an understanding and internal model of the outside world?
I think we also forget how stupid the bottom 25% of humans are... if ChatGPT is smarter than 90% of people on 90% of things, surely we can grant that it has intelligence even if it is not AGI.
That's a good comparison, but the essential question is: will these models ever be so good that they will be able to do that? If they are at some point, then the question of whether the model is intelligent becomes irrelevant (Edsger Dijkstra made the comparison that the question of whether machines could think was "about as relevant as the question of whether submarines can swim"). However, I still have some doubts.
I think they can already do this sort of reasoning to a significant extent... at least for simple stuff.