
[...]

We found that…

  • Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
  • Premium chatbots provided more confidently incorrect answers than their free counterparts.
  • Multiple chatbots seemed to bypass Robots Exclusion Protocol preferences.
  • Generative search tools fabricated links and cited syndicated and copied versions of articles.
  • Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
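For context on the crawling point above: the Robots Exclusion Protocol lets publishers state crawl preferences in a robots.txt file, and well-behaved crawlers are expected to consult it before fetching pages. A minimal sketch of how that check works, using Python's standard `urllib.robotparser` with a hypothetical robots.txt and made-up bot names:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a publisher blocks "ExampleBot" but allows others.
robots_txt = """\
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler asks before fetching; a non-compliant one skips this step.
print(parser.can_fetch("ExampleBot", "https://example.com/article"))  # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))    # True
```

Bypassing the protocol simply means fetching the page without (or despite) this check; robots.txt is a convention, not an enforcement mechanism.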


Our findings were consistent with our previous study, showing that these problems are not unique to ChatGPT but recur across all of the prominent generative search tools we tested.

[...]



[...]



[...]



...read more at cjr.org

cc/ @SimpleStacker #909363