Literal "hallucinations" were the result.
After seeking advice on health topics from ChatGPT, a 60-year-old man who had a "history of studying nutrition in college" decided to try a health experiment: He would eliminate all chlorine from his diet, which for him meant eliminating even table salt (sodium chloride). His ChatGPT conversations led him to believe that he could replace his sodium chloride with sodium bromide, which he obtained over the Internet.
Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him. Though extremely thirsty, the man was paranoid about accepting the water that the hospital offered him, telling doctors that he had begun distilling his own water at home and that he was on an extremely restrictive vegetarian diet. He did not mention the sodium bromide or the ChatGPT discussions.
His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies, especially in key vitamins. But the bigger problem was that the man appeared to be suffering from a serious case of "bromism." That is, an excess amount of the element bromine had built up in his body.
...read more at arstechnica.com
Good thing that you're responsible for the output of ChatGPT too!
Show me where it says that. Ahahah
https://openai.com/policies/row-terms-of-use/
bastards
You even have to ensure that the response doesn't violate any laws, so add "do not violate any applicable laws" to your prompt to cover that base. This is great, because then you'll spend 200M tokens having it read all applicable laws.
Whoa. Couldn't they entrap people by feeding them illegal output?
WHOA. The federal government is partnering with OpenAI.
F***
Probably, but
My paranoid scenario is that the CIA/FBI instructs OpenAI to send illegal output to a targeted user, then uses that illegal output as a pretext to further investigate and prosecute the target. The user would have no way of knowing that the output was customized to target them, and the TOS says they are responsible for the legality of the outputs.
Stupid people don't need AI to fuck themselves. But it will help governments do it for them, because governments are lazy.