102 sats \ 1 reply \ @SimpleStacker 19h \ on: The Problem — Eliezer Yudkowsky et al AI
This all seems hopelessly speculative and poorly defined to me. I'm sure they have spent a lot of time thinking about it, but I just don't think today's intelligentsia are up to the task, especially in the realm of moral philosophy. This is especially clear when you see modern ethicists who can't say that infanticide is wrong, or who think it's morally acceptable to spread vaccination without consent via undetectable vectors like mosquitoes in order to circumvent vaccine skepticism.
To be fair, I haven't read their original paper, but the syllogistic points brought up in the article seem extremely loose. "AI is very likely to pursue wrong goals." Why so confident that this is true? And even if it is true, why would those goals necessarily lead to human extinction? Humans pursue wrong goals too, up to and including humans with access to the nuclear launch codes. Is AI really more dangerous?
To me, the best explanation for AI doomerism is that AI is the topic du jour. Demand for hot takes about AI is leading people to supply them, and accuracy of prediction is secondary to satisfying that demand.
That's my hot take for the day.
I think the hot take is spot on. The people who are worked up about AI doom were previously worked up about COVID or the climate or terrorism or violent video games or teenagers having sex.
One wonders, though, what the appropriate response should be when actually faced with an existential threat.