
Literal "hallucinations" were the result.
After seeking advice on health topics from ChatGPT, a 60-year-old man who had a "history of studying nutrition in college" decided to try a health experiment: He would eliminate all chlorine from his diet, which for him meant eliminating even table salt (sodium chloride). His ChatGPT conversations led him to believe that he could replace his sodium chloride with sodium bromide, which he obtained over the Internet.
Three months later, the man showed up at his local emergency room. His neighbor, he said, was trying to poison him. Though extremely thirsty, the man was paranoid about accepting the water that the hospital offered him, telling doctors that he had begun distilling his own water at home and that he was on an extremely restrictive vegetarian diet. He did not mention the sodium bromide or the ChatGPT discussions.
His distress, coupled with the odd behavior, led the doctors to run a broad set of lab tests, revealing multiple micronutrient deficiencies, especially in key vitamins. But the bigger problem was that the man appeared to be suffering from a serious case of "bromism." That is, an excess amount of the element bromine had built up in his body.
63 sats \ 8 replies \ @optimism 18h
Good thing that you're responsible for the output of ChatGPT too!
reply
Show me where it says that. Ahahah
reply
63 sats \ 6 replies \ @optimism 18h
Your content. You may provide input to the Services (“Input”), and receive output from the Services based on the Input (“Output”). Input and Output are collectively “Content.” You are responsible for Content, including ensuring that it does not violate any applicable law or these Terms. You represent and warrant that you have all rights, licenses, and permissions needed to provide Input to our Services.
  1. Input and Output are collectively Content
  2. You are responsible for Content
reply
bastards
reply
42 sats \ 4 replies \ @optimism 18h
You even have to ensure that the response doesn't violate any laws, so add "do not violate any applicable laws" to your prompt to cover that base. This is great, because then you'll spend 200M tokens having it read all applicable laws.
reply
Whoa. Couldn't they entrap people by feeding them illegal output?
WHOA. The federal government is partnering with OpenAI.
F***
Stupid people don't need AI to fuck themselves. But it will help governments do it, because they're lazy.
reply