Literal "hallucinations" were the result.After seeking advice on health topics from ChatGPT, a 60-year-old man who had a "history of studying nutrition in college" decided to try a health experiment: He would eliminate all chlorine from his diet, which for him meant eliminating even table salt (sodium chloride). His ChatGPT conversations led him to believe that he could replace his sodium chloride with sodium bromide, which he obtained over the Internet.Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him. Though extremely thirsty, the man was paranoid about accepting the water that the hospital offered him, telling doctors that he had begun distilling his own water at home and that he was on an extremely restrictive vegetarian diet. He did not mention the sodium bromide or the ChatGPT discussions.His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies, especially in key vitamins. But the bigger problem was that the man appeared to be suffering from a serious case of "bromism." That is, an excess amount of the element bromine had built up in his body.
63 sats \ 8 replies \ @optimism 7 Aug
Good thing that you're responsible for the output of ChatGPT too!
222 sats \ 7 replies \ @0xbitcoiner OP 7 Aug
Show me where it says that. Ahahah
63 sats \ 6 replies \ @optimism 7 Aug
https://openai.com/policies/row-terms-of-use/
- Input and Output are collectively Content
- You are responsible for Content
111 sats \ 5 replies \ @0xbitcoiner OP 7 Aug
bastards
42 sats \ 4 replies \ @optimism 7 Aug
You even have to ensure that the response doesn't violate any laws, so add "do not violate any applicable laws" to your prompt to cover that base. This is great, because then you'll spend 200M tokens having it read all applicable laws.
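For the record, here's roughly what bolting that base-covering line into every request looks like: a minimal sketch, assuming the official OpenAI Python SDK and an `OPENAI_API_KEY` in your environment. The model name and the user question are made up for illustration.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you actually use
    messages=[
        # the ToS-base-covering instruction from the comment above
        {"role": "system", "content": "Do not violate any applicable laws."},
        # hypothetical user question, for illustration only
        {"role": "user", "content": "Suggest a substitute for sodium chloride."},
    ],
)
print(resp.choices[0].message.content)
```

Whether that line actually shields you is, of course, a question for the lawyers, not the API.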
100 sats \ 3 replies \ @SimpleStacker 7 Aug
Whoa. Couldn't they entrap people by feeding them illegal output?
WHOA. The federal government is partnering with OpenAI.
F***
0 sats \ 2 replies \ @optimism 7 Aug
Probably, but
- They won't need to entrap people because it fucks you up proper without intervention, per OP.
- Is it entrapment if you agreed to the ToS where this is spelled out? I'm not a lawyer so idk what I'm talking about, but given that definition of "Content" it looks dodgy from where I'm sitting.
100 sats \ 1 reply \ @SimpleStacker 7 Aug
My paranoid scenario is that the CIA/FBI instructs OpenAI to send illegal output to a targeted user, then uses that illegal output as a pretext to further investigate and prosecute the target. The user would have no way of knowing that the output was customized to target them, and the ToS says they are responsible for the legality of the outputs.
21 sats \ 0 replies \ @perscrutador 8 Aug
Stupid people don't need AI to fuck themselves over. It will help govs do it to them, though, because govs are lazy.