"Perhaps AI could be responsible for a catastrophic spectacle, such as a deadly software update for self-driving cars, or a bad AI-driven decision collapsing a major company, Wooldridge suggests. But his main concern are the glaring safety flaws still present in AI chatbots, despite them being widely deployed. On top of having pitifully weak guardrails and being wildly unpredictable, AI chatbots are designed to affect human-like personas and, to keep users engaged, be sycophantic."
...Very much a click-bait article that is making the rounds this morning. It offers nothing on how this researcher arrived at his conclusions. His argument seems to be that AI tools are designed to be interactive, and are therefore bad, because people are dumbasses, gullible to anything that acts like a person.