We believe that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.1
I'm struggling to understand how anyone could look at the current state of AI development and conclude that anything like superintelligence is coming in the next decade.
Sure, if superintelligence is developed, I can believe we all die. But the same is true of a lot of ifs, many of which aren't worth thinking about.
I suppose it's useful to get the perspective of people who believe the sky is falling, if for no other reason than to observe how misguided we can be in our evaluation of the facts before us.
I would note that the authors of this paper have spent a good deal of time on the subject, far more than I have.
Footnotes
1. The most likely way I see AI devastating all of us and leading to our extinction is if we overestimate its capabilities and end up relying on it to be reasonable when, at its core, it still doesn't know what reason is -- who knows? I am but a young girl unskilled in the ways of war.