It sounds like what's going on is that the base versions are trained with a bunch of safeguards to ensure they don't say crazy things, but when you fine-tune the model, some of those safeguards may go away.
It's kinda hard to say since they don't tell us exactly what they fine-tuned the model with.
That's what these sister-raping worldcoin scammer AI execs have done to the public's mind. The AI industry promotes these ridiculous scare tactics in order to make their creation seem "so important that it's dangerous". But the only danger is attributing intent where there is none.