A critical response to Thomas Friedman's article "The One Danger That Should Unite the U.S. and China." It does a nice job of speaking plainly about what AI is and what it is not.
Friedman's perspective on current AI is infused with a fair amount of magical thinking about how these systems work and what they can do. Here are a few examples.
Friedman notes that, for AI systems whose training data included text in multiple languages, “the AI could translate between the languages it had been trained on—without anyone ever programming it to do so.” Even more shocking: “[t]he systems started speaking foreign languages they hadn’t been taught. So, it’s just a sign that we have to be really humble about how much these systems know and how these systems work.” Further, as evidence for the claim that AI systems are somehow gaining their own “agency,” Friedman summarizes a news article from Bloomberg:
Mitchell manages to dismiss most of these claims without too much trouble. Yet she still points out that
International cooperation on AI safety is unquestionably something we should all strive for. The U.S. and China need to find ways to regulate AI to avoid its current and likely future harms, such as deep fakes used for fraud and manipulation, AI bias in decisions, misinformation, surveillance, loss of privacy, and so on.
But Mitchell takes a strong stand that superintelligence is not one of the problems on her list, and that most of what we recognize as intelligence in AI systems is actually an artifact of what they are trained on:
To paraphrase (badly) the horror movie Soylent Green: “AI is people!” The vast corpus of human writing these systems are trained on is the basis for everything modern AI can do; no magical “emergence” need be invoked.