Agents like this are always and only "playing along." That's what they do.
Exactly right. ChatGPT does the exact same thing all the time. It even states it will do just that right from the start in many situations. It's standard routine. Anthropic is just playing the marketing game. Can't blame them, that's the game.
Agents like this are always and only "playing along." That's what they do.
It's not "refusing," it's predicting that refusal is the most likely thing the assistant described in its system prompt would to do based on parameters of the model that runs it.