
Melanie Mitchell's critical response to Thomas Friedman's article "The One Danger That Should Unite the U.S. and China." She does a nice job of speaking plainly about what AI is and what it is not.
Friedman's perspectives on current AI are infused with a fair amount of magical thinking about how these systems work and what they can do. Here are a few examples.
  • Friedman notes that, for AI systems whose training data included text in multiple languages, “the AI could translate between the languages it had been trained on—without anyone ever programming it to do so.”
  • Even more shocking: “[t]he systems started speaking foreign languages they hadn’t been taught. So, it’s just a sign that we have to be really humble about how much these systems know and how these systems work.”
  • Further, as evidence for the claim that AI systems are somehow gaining their own “agency,” Friedman summarizes a news article from Bloomberg:
Mitchell manages to dismiss most of these claims without too much trouble. Yet she still points out that
“International cooperation on AI safety is unquestionably something we should all strive for. The U.S. and China need to find ways to regulate AI to avoid its current and likely future harms, such as deep fakes used for fraud and manipulation, AI bias in decisions, misinformation, surveillance, loss of privacy, and so on.”
But Mitchell takes a strong stand that superintelligence is not one of the problems on her list, and that most of what we recognize as intelligence in AI is actually an artifact of what these systems are trained on:
To badly paraphrase the horror movie Soylent Green: “AI is people!” The vast corpus of human writing these systems are trained on is the basis for everything modern AI can do; no magical “emergence” need be invoked.
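To see how little magic is required, here is a minimal sketch (my own illustration, not from Mitchell's article, and vastly simpler than a transformer-based LLM): a toy bigram model whose every output word is lifted verbatim from its human-written training text. Real LLMs generalize across billions of parameters rather than copying strings, but the raw material is the same: people's words.

```python
# Toy bigram "language model": everything it can ever say
# comes straight from its human-written training text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this table IS the model's entire "knowledge".
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(seed: str, length: int = 8) -> str:
    """Sample a continuation; every token comes verbatim from the corpus."""
    out = [seed]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: the corpus has nothing more to say
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the cat ate"
```

Swap the ten-word corpus for a scrape of the internet and the lookup table for a neural network, and you get fluent generalization instead of parroting, but no new source of knowledge appears anywhere in the pipeline.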
102 sats \ 0 replies \ @optimism 41m
I pretty much align with this take. There's no magic. There's no consciousness. There's just software interacting with fuzzy logic, and people interacting with that software.
Since people are fucking retards, there is a "safety" issue. But it can easily be solved by making those who provide services liable. Look at it this way: if I write FOSS software that you can run, I can put in a zero-liability clause (which may or may not survive a legal challenge), so if you want to hold someone liable for your future losses, you will not run this software. Instead you will take a SaaS solution, where someone offers you functionality, you pay them for the usage right, and you can hold them accountable if the software messes up - because you're interacting with a service, not code.
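For concreteness, the canonical example of such a zero-liability clause is the warranty disclaimer that every MIT-licensed project ships:

```text
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```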
AI (=software) doesn't have to be much different. But do hold every platform that provides you the actual chatbot service accountable, especially if you pay them.
reply
The West is desperate for a magical genie in a bottle to escape its now near-terminal decline. AI is an illusion conjured out of desperation.
reply
I'm not involved in any non-English conversations about AI. Do you find that the AI-superintelligence boogeyman is less common in the East?
reply
7 sats \ 0 replies \ @brave 8h
I resonate with your emphasis on AI being a reflection of its training data; “AI is people!” is such a sharp way to put it. It reminds us that these systems aren’t conjuring intelligence out of thin air; they’re remixing human knowledge, flaws and all.
reply