
We believe that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.1
I'm struggling to understand how anyone could look at the current state of AI development and conclude that anything like superintelligence is coming in the next decade.
Sure, if superintelligence is developed, I can believe we all die. But that's true of a lot of ifs, many of which aren't worth thinking about.
I suppose it's useful to get the perspective of people who believe the sky is falling, if for no other reason than to observe how misguided we can be in our evaluation of the facts before us.
I would note that the authors of this paper have spent a good deal of time on the subject. Far more than I have.

Footnotes

  1. The most likely way I see AI devastating all of us and leading to our extinction is if we overestimate its capabilities and end up relying on it to be reasonable when at its core it still doesn't know what reason is -- who knows? I am but a young girl unskilled in the ways of war.
122 sats \ 5 replies \ @optimism 17h
Catastrophe can be averted via a sufficiently aggressive policy response.
Deez guys... see #1070367
I think that appeal to authority is the worst possible action one can take, because then the only class able to exterminate humanity will be that authority, and you shall have no countermeasures. Do you trust these people to actually preserve life? If yes, what if you live in a democracy and next time there's a flavor of politicians in charge that you didn't vote for, for example because you wouldn't trust them with your life?
reply
Revert to an appeal to authority!?!?!? What a joke! Authority is the entity responsible for most of the wars and harm done to innocent people around the world at any given time. So, appealing to someone whose best ideas are to either kill you or steal from you is ludicrous. Good luck with that, but please leave me out.
reply
I agree with you, although the counterargument might be our (relative) success with nuclear weapons. It's highly regulated and has the capacity to cause human extinction -- and yet we haven't blown ourselves up. We haven't even had a one-off somebody-went-nuts-and-detonated-a-nuke incident. How are we to understand this success?
As far as AI goes, isn't the greater concern that it's not "nuclear" tech, that it's not on a trajectory to become capable of wiping out humanity, and that regulation will therefore merely limit/distort the benefits we might achieve from using it?
reply
102 sats \ 2 replies \ @optimism 15h
I'm going to be confrontational about the counterargument; apologies if it's offensive. I don't want it to be, but it could be.
our (relative) success with nuclear weapons.
That genie wasn't out of the bottle yet, 80 years and a few days ago, and once it was, it was promptly used to wipe out two cities, by the authorities. So I do not agree, especially when reflecting on the history of nuclear weapons, that trusting governments with technology is a proven method for preventing the loss of human life. It will just be another genocide if we entrust ever-more-opaque governments, operating with increasingly uncontrolled power and increasingly warmongering or outright waging or sponsoring total war against civilian populations, to do the right thing.
They won't do the right thing. Maybe they will for you, if they're your government and you're of the right race, gender, wealth and circle of friends. But do we truly believe in the benevolence of the current ruling class? Personally, I haven't seen it.
reply
100 sats \ 1 reply \ @Scoresby OP 15h
Yeah, my heart wasn't in the counterargument. I can't help but agree with you: giving governments power over a thing pretty often results in that thing being misused to great harm. What would the alternative history look like in which nuclear materials are unregulated? (I didn't find any part of what you said offensive.)
reply
Mutually assured destruction.
reply
This all seems hopelessly speculative and poorly defined to me. I'm sure that they have spent a lot of time thinking about it, but I just don't think today's intelligentsia are up to the task, especially in the realm of moral philosophy. This is especially clear when you see modern ethicists who can't say that infanticide is wrong, or who think it's morally acceptable to spread vaccination without consent via undetectable vectors like mosquitos in order to circumvent vaccine skepticism.
To be fair, I haven't read their original paper, but the syllogistic points brought up in the article seem extremely loose. "AI is very likely to pursue wrong goals." -- Why so confident that this is true? -- Even if it is true, why would their goals necessarily correlate with human extinction? Humans pursue wrong goals, up to and including humans with access to the nuclear launch codes. Is AI surely more dangerous?
To me, the best explanation for AI doomerism is that AI is the topic-du-jour. Demand for hot takes about AI is leading people to fill out that supply. Accuracy in prediction is secondary to satisfying demand for hot takes.
That's my hot take for the day.
reply
I think "hot take" is spot on. The people who are worked up about AI doom were previously worked up about COVID or the climate or terrorism or violent video games or teenagers having sex.
One wonders, though, what the appropriate response should be when actually faced with an existential threat.
reply
213 sats \ 1 reply \ @freetx 17h
I'm struggling to understand how anyone could look at the current state of AI development and conclude that anything like superintelligence is coming in the next decade.
I half think it's all just a perverse marketing strategy. Getting everyone (especially Wall Street) talking about "should this dangerous new tech be regulated?!?" imbues it with extra-special investing cred.
Sam Altman certainly gives me conman vibes. Thankfully his creations seem increasingly underwhelming, and in the long term OpenAI will probably lose its advantage and become a lackluster offering.
reply
Yes, the marketing vibes are high in such statements. The world is ending - pay attention to meeeee!
It's frustrating, though, because it muddies the waters of what is actually going on.
reply