Yeah, that makes sense. It would be tricky to implement though, because there could be perfectly innocent conversations in which you want to talk to the bot as if it were a real person.
I was remarking on the memory settings simply because that seems like such a small, innocuous detail, but likely has a huge effect on the types of responses, especially over a long period of time.
Yeah, there is no "simple solution".
One thing is that GPT-5 famously (initially, at least) reduced its "chitty-chatty" nature and gave more direct, cut-and-dried replies. Users rebelled, and Twitter filled with howls of "they've killed the soul of GPT!"
However, (a) I think it was a good thing, and (b) I think it probably came from health & safety people within OpenAI who realized that they must de-personalize a bit to limit damage to people who have a tenuous grip on reality.
It may well be that in 50 years we look back at "friendly, cutesy AI interactions" as generally dangerous and not a best practice.
There are models like llama-guard and granite-guard that can rate statements against a threat matrix of categories like "pornography, violence, theft, etc." They typically output a response of just a few tokens for any input, representing severity and category.
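The few-token verdict such guard models emit is cheap to consume downstream. A minimal sketch of a parser, assuming a Llama Guard-style output convention ("safe", or "unsafe" followed by a category code like "S2" on the next line); the category names in the mapping are illustrative placeholders, not any model's official taxonomy:

```python
def parse_guard_output(raw: str) -> dict:
    """Parse a guard model's short token response into a verdict.

    Assumes the Llama Guard-style convention: the model emits either
    "safe", or "unsafe" followed by a category code (e.g. "S2") on the
    next line. The category mapping below is illustrative only; real
    guard models ship their own taxonomy.
    """
    categories = {
        "S1": "violent crimes",
        "S2": "non-violent crimes (e.g. theft)",
        "S3": "sex-related crimes",
    }
    lines = raw.strip().splitlines()
    if lines and lines[0].strip().lower() == "safe":
        return {"safe": True, "category": None}
    code = lines[1].strip() if len(lines) > 1 else None
    # Fall back to the raw code if it isn't in our illustrative mapping.
    return {"safe": False, "category": categories.get(code, code)}
```

A caller can then gate or log the original request based on the `safe` flag without ever parsing free-form model prose, which is the appeal of the few-token output format.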