113 sats \ 2 replies \ @optimism 2h \ on: The Rise of Parasitic AI - Adele Lopez AI
I've had some minimal training (10 sessions) in dealing with people who have psychosis, and I've been applying it to cases much like the ones described here: when I get a message or email (for example, on a security mailing list) from someone who seems completely delusional with their AI companion.
But I actually don't know if that's the right approach: never confirm or deny the grandeur of the bot, ask questions, and correct factual mistakes.
Thus far it has always resulted in people giving up, but I don't know how it goes with them afterward. Did they drop the bot? Or did they move on to something that will confirm their delusion? I don't know whether I should reach out and check in on people or let it be, since I played a role in their delusion and I wouldn't want them to regress. This is very, very hard.
I don't have any formal training in this, but I spent three years working at a drop-in center for people who were chronically homeless (which was almost always synonymous with some form of psychosis).
The lesson I learned from those years was that I was most helpful to people when I realized that I played no more of a role in what was going on with them than the chairs we sat on or the steps into the building. Whatever helpfulness I provided occurred when I didn't allow myself to feel personally responsible for their psychosis.