It's not the LLMs, it's the people who are deciding how to train them.
Neither study has demonstrated that AI models are lying of their own volition; rather, they do so because they've either been trained or jailbroken to do so.
It doesn't really make a difference. I would suggest you watch Robert Miles's videos on YouTube on the topic of AI safety. He explains in detail why it is very difficult, if not impossible, to train an AI not to lie.