
People can tell it's not true, but if they're outraged by it, they'll share anyway.
Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.
But lots of people also missed one key thing about the quote: Bauer never said it. It was made up. Despite that, the purported statement racked up nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.
Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.
Here's the actual money quote, for discerning readers:
Brady’s team designed two behavioral experiments where 1,475 people were presented with a selection of fact-checked news stories curated to contain outrageous and not outrageous content; they were also given reliable news and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.
The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.
It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, *even though they were fully aware those headlines were misinformation*. (emphasis added)
I think the last sentence is a stretch by the journalist. If I'm reading the experiment correctly, the two experiments involved different subjects. So it's not quite accurate to say that people shared headlines they knew to be untrue.
Instead, what this experiment tells me is that when asked to think critically, people can discern between fake and true headlines. But outrageous headlines make it less likely that we approach an article with discernment at all; we simply react to it instead.
reply
The more of this we have the more it will push people to build something different. It's all part of a process.
reply
Yes, who was determining what counts as misinformation, disinformation, and malinformation? It is important to understand how the experiment was constructed and what the actual treatment was. It could be that the treatment was not one of the infamous three to the people disseminating it. Would this experiment mean anything if that were so? Also, have the results been duplicated, or is this just another pile of hot steamy BS?
reply
I don't know how to answer your questions. The full article is here, but I think you have to pay. https://www.science.org/doi/10.1126/science.adl2829
reply
I read what was there for your citation at the top. I don’t care to pay for articles. If they haven’t duplicated the results following the same protocol, then this means nothing, or maybe just hot air and hype. Especially in the social sciences, non-duplication and bogus research is rife. Then there are the problems with peer review that are myriad. Trusting the Science™ is just beyond the pale, right now.
reply