I tend to skim the surface of the AI news world, which is probably why most of what I end up seeing is one AI pontificator or another talking about how a particular LLM can't say this or can say that, as if it's some kind of gotcha: aha! We caught the evil mad scientists behind the screen with their agenda hanging out!
In this case, people apparently got Grok to say lots of pro-Hitler things. Last year it was exciting and astounding that LLMs were giving people "diversity" pictures of historical events. Remember black George Washington crossing the Delaware? Cue the soyjak pointing in outrage at fill-in-your-blank.
These outrages seem completely unimportant to me.
The real problem at hand is how much trust people place in the answers they receive when working with an LLM. Maybe the best outcome is that we all get seeded with a very strong distrust of LLM outputs -- at least enough distrust to make us check our answers once in a while.