These criticisms of AI critics aren't particularly technical, but I found them to be an interesting check on my blanket belief that LLMs don't do much more than guess what comes next.
Even if we grant their claim that LLMs don't reason like people do and don't have an "embodied understanding of the world", it doesn't follow that this is some really fundamental limitation on their capabilities.
An "aggregation" of millions of minds. It's interesting to see what happens when we reinforce the quality output (ourselves - this may be subjective) and remove the slop; would it become less sloppy?
Rarely do I find the result of one of my queries to be slop. Frequently do I find the outputs posted by other people to be slop. This probably has nothing to do with the outputs themselves, and everything to do with what I'm looking for when reading.
When I query, most of the time I'm looking for something in the output, not the framing of the output -- which is often the slop part.
I doubt the thing that makes outputs feel like slop can really be reinforced away.
I always tell chatbots to reply in prose and to absolutely not use bullet points, because I've grown allergic to them.
Ah yes! Thanks!
Right, we sometimes forget that LLMs are a different kind of intelligence; they shouldn't always be compared to humans.
Shallit is perhaps responding to statements like this.