I think the most offensive thing built into these chatbots is the emulation of a persona. This is probably what is making people:
  • believe that LLMs are singular and an actual entity
  • believe that LLMs are smarter than a human being
  • fall in love with a chat template (or manipulate it sexually w/ a custom system prompt, because Elmo did it)
  • follow bad advice from a chat template
  • completely lose all cognitive ability because it is like a mentor to them
Not too long ago one of my security mailing lists got a mail from a person that:
  • Spoke about "their AI" as their partner
  • Followed its advice to the letter, but didn't have it write the email (kudos for non-slop; that's the one feather I'd put in their cap)
  • Relayed everything I replied back to it
  • Then came up with even more nonsense, including invented nomenclature
  • Apologized for their partner's mistakes
... and so on. All in all, I felt extremely sorry for this person's delusion (and a little for my lost time, too), because none of it would have happened if chatbots weren't role-playing.
So if you want child safety, you have to teach children that LLMs are not entities, but software that people can tune to trick you. Like the man in the raincoat, and like not taking candy from a stranger.