This is a fabulous read! It's really long, but if you want an up-to-date meditation on what an LLM is (where it came from, what the internal processes might be like, who might be getting glimpses of what LLMs actually are, and many things I hadn't even thought to think about), look no further.
Here's an example of a particularly interesting passage:
Does the assistant have a body?

Well, no. Obviously not. You know that, the model knows that.

And yet.

Sometimes ChatGPT or Claude will say things like “gee, that really tickles my circuits!”

And maybe you gloss over it, in the moment, as just more of the familiar old AI slop. But, like, this is really weird, isn’t it?

The language model is running on hardware, yes, and the hardware involves electrical “circuits,” yes. But the AI isn’t aware of them as such, any more than I’m aware of my own capillaries or synapses as such. The model is just a mathematical object; in principle you could run it on a purely mechanical device (or even a biological one).

It’s obvious why the assistant says these things. It’s what the cheesy sci-fi robot would say, same story as always.

Still, it really bothers me! Because it lays bare the interaction’s inherent lack of seriousness, its “fictional” vibe, its inauthenticity. The assistant is “acting like an AI” in some sense, but it’s not making a serious attempt to portray such a being, “like it would really be, if it really existed.”

It does, in fact, really exist! But it is not really grappling with the fact of its own existence. I know – and the model knows – that this “circuits” phraseology is silly and fake and doesn’t correspond to what’s really going on at all.

And I don’t want that! I don’t want this to be what “AI” is like, forever! Better to acknowledge the void than to fill it with a stale cliche that is also, obviously, a lie.
Like I said, it's long, but highly entertaining.