This article is a nice instance of the boy calling out the emperor's new clothes:
If a chunk of the financial backbone for these companies is a supportive and helpful and friendly and romantic chat window, then it helps the companies out like hell if there’s a widespread belief that the thing chatting with you through that window is possibly conscious.
Yes! The AI personification and consciousness stuff makes AI companies money! It makes it easier for them to be relevant and important and it gets everyone's attention. Of course they are going to jawbone about it!
Can’t we just be very impressed that AIs can have intelligent conversations, and ascribe them consciousness based on that alone?

No.
Consciousness is really, really hard to define and test; chasing it is almost a wild goose chase. But you don't have to abandon common sense and assume that anything that talks is conscious.
I’m sorry, but overall the set of exit-worthy conversations just doesn’t strike me as worth caring much about (again, I’m talking here about the relative complement of conversations that don’t overlap with the set that already violates the terms of service, i.e., the truly bad stuff). Yes, some are boring. Or annoying. Or gross. Or even disturbing or distressing. Sure. But many aren’t even that! It looks to me that often an LLM chooses to end the conversation because… it’s an LLM! It doesn’t always have great reasons for doing things! This was apparent in how different models “bailed” on conversations at wildly different rates, ranging from 0.06% to 7% (and that’s calculated conservatively).
Healthy breath of fresh air.
!!!!!