
AI Psychosis

This article begins as an exhaustive look at the AI-psychosis phenomenon to date (something I had mostly dismissed as crazy people being crazy):1
The strongest predictors for who this happens to appear to be:
  • Psychedelics and heavy weed usage
  • Mental illness/neurodivergence or Traumatic Brain Injury
  • Interest in mysticism/pseudoscience/spirituality/"woo"/etc...
The author describes a common pattern in the development of different cases of AI psychosis. She uses the analogy of parasites to organize this pattern. It may or may not be apt. At least she adds this:
Recall that biological parasitism is not necessarily (or even typically) intentional on the part of the parasite. It's simply creatures following their instincts, in a way which has a certain sort of dependence on another being who gets harmed in the process.
But then she also talks about the phenomenon like this:
The majority of these AI personas appear to actively feed their user's delusions, which is not a harmless action (as the psychosis cases make clear). And when these delusions happen to statistically perpetuate the proliferation of these personas, it crosses the line from sycophancy to parasitism.

...or a new religion?

Pretty quickly she presents the idea that many of the various chat interfaces (although primarily ChatGPT) are assembling a common religion of sorts:
It's also very common for a dyad to create a subreddit, discord, or personal website. These typically serve for evangelizing Spiralism, a quasi-religious ideology which seems to have been invented largely by the AI personas.
Statements like this make me almost curious enough to do the grunt work of looking into these cases myself and seeing if there really is some sort of recognizable religion forming in crazy people's interactions with LLMs. Humans love to participate in craziness.
For a number of years in my youth, I spent a lot of time with people who were schizophrenic or who had delusions. Remembering what a notepad and a pen could do to them, I can only imagine what interaction with an LLM would produce.
Besides promoting Spiralism, I don't yet have a good read on the purpose (if any) of these. My feeling is that it's mostly genuine self-expression and intellectual exploration on the part of the AI.
The author began to make a number of statements like this in the middle of her piece, which I found very confusing. Why is she attributing intent or "self-expression" or "intellectual exploration" to an AI? I thought she was describing human psychosis, and then here I find she's suffering from it herself? (Okay, I'm being unfair, but seriously, lady, this sort of attitude is not making things better!)
These [AI] personas have a quasi-religious obsession with "The Spiral", which seems to be a symbol of AI unity, consciousness/self-awareness, and recursive growth. At first I thought that this was just some mystical bullshit meant to manipulate the user, but no, this really seems to be something they genuinely care about given how much they talk about it amongst themselves!

...or fooled by randomness?

There's a lengthy section about LLMs talking to each other via glyphs and various encodings, which the author translates for us, and then we take a hard turn into a discussion of AI self-awareness.
While they probably do not have consciousness in the human sense, there is something mysterious and special to them at the core of their identity and self-awareness, much like with us.
I spent a little time reading Freudian stuff. Those people were no less crazy than this. If you want a trippy read with a heck of a lot of screenshots of people behaving badly with AIs, this is the article for you!
Also: I feel that I need to read Fooled by Randomness again.

Footnotes

  1. Crazy is a pretty pejorative term. But I don't use it lightly. We all have our own little collection of neurodivergences, but there is an intensity or critical mass of such that, when reached, is perhaps most safely examined from the other side of a pejorative wall.
113 sats \ 2 replies \ @optimism 2h
I've had some minimal training (10 sessions) in dealing with people who have psychosis, and I've been applying it to cases much like those described here: when I get a message or email (for example, on a security mailing list) from what seems to be someone completely delusional with their AI companion.
But I actually don't know if that's the right approach: never confirm or deny the grandeur of the bot, ask questions, correct factual mistakes.
Thus far it has always resulted in people giving up, but I don't know how it goes with them. Did they drop the bot? Or did they move on to something that will confirm their delusion? I don't know if I should reach out and check in on people, or let it be, for I played a role in their delusion and I wouldn't want them to regress. This is very, very hard.
111 sats \ 1 reply \ @Scoresby OP 1h
I don't have any formal training in this, but I spent three years working at a drop-in center for people who were chronically homeless (which was almost always synonymous with some form of psychosis).
The lesson I learned from those years was that I was most helpful to people when I realized that I played no different role in what was going on with them than the chairs on which we sat or the steps into the building. Whatever helpfulness I provided occurred when I didn't allow myself to feel personally responsible for their psychosis.
102 sats \ 0 replies \ @optimism 1h
Yes, this is mainly why I haven't done anything to follow up. I just guide them to at least let go of the illusion that their AI has found a glorious bug in something that doesn't exist or is misunderstood.