Abstract
Generative AI chatbots like OpenAI's ChatGPT and Google's Gemini routinely make things up. They "hallucinate" historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles. Recently, some have argued that these hallucinations are better understood as bullshit. Chatbots produce streams of text that look truth-apt without concern for the truthfulness of what this text says. But can they also gossip? We argue that they can. After some definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some distinct harms — what we call "technosocial harms" — that follow from it.

Introduction
Generative AI chatbots like OpenAI’s ChatGPT and Google’s Gemini routinely make things up. They fabricate—or “hallucinate”, to use the technical term—historical events and figures, legal cases, academic papers, non-existent tech products and features, biographies, and news articles (Edwards, 2023). They’ve even suggested that eating mucus and rocks can lead to better health and encouraged users to put glue on their pizza to keep the cheese from slipping off (McMahon & Kleinman, 2024; Piltch, 2024).
Recently, some have argued that although chatbots often generate false information, they don’t lie. As mere text-generating predictive engines, they are not—and cannot be—concerned with truth; chatbots are not agents with experiences and intentions and therefore cannot misrepresent the world they see, which is what “hallucinate” implies. Instead, they bullshit, in the Frankfurtian sense (Frankfurt, 2005). They produce streams of text that look truth-apt without any concern for the truthfulness of what this text says (Bergstrom & Ogbunu, 2023; Fisher, 2024; Hicks et al., 2024; Slater et al., 2024).
Chatbot bullshit can be deceptive—and seductive. Because chatbots sound authoritative when we interact with them—their dataset exceeds what any single person can know, and their bullshit is often presented alongside factual information we know is true—it’s easy to take their outputs at face value. Doing so, however, can lead to epistemic harm. For example, unsuspecting users might develop false beliefs that lead to dangerous behaviour (e.g., eating rocks for health), or they might develop biases based on bullshit stereotypes or discriminatory information propagated by these chatbots (Buolamwini, 2023; Birhane, 2021, 2022; Obermeyer et al., 2019).
We argue that chatbots don’t simply bullshit. They also gossip, both to human users and to other chatbots. Of course, chatbots don’t gossip exactly like humans do. They’re not conscious, meaning-making agents in the world and, therefore, they lack the motives and emotional investment that typically animate human gossip. Nevertheless, we’ll argue that some of the misinformation chatbots produce is a kind of bullshit that’s better understood as gossip. And we’ll argue further that this distinction is more than simply a conceptual debate. Chatbot gossip can lead to kinds of harm—what we call technosocial harms—potentially wider in scope and different in character than some of the epistemic harms that follow from (mere) chatbot bullshit. After some initial definitions and scene-setting, we focus on a recent example to clarify what AI gossip looks like before considering some technosocial harms that flow from it.