By Joshua Mawhorter
As AI continues to develop, the prophets of doom claim that it will “take over” and create a dystopian society. Far from being an “existential threat,” AI is a tool that can be used for good or ill.
I tend to agree with the author. There are a lot of people overestimating what AI (in its current state) is capable of. My take is:
  • AI is neither purely beneficial nor entirely harmful; its impact depends on how it is used.
  • Dystopian predictions about AI are overly dramatic, but caution and criticism still seem like a good idea.
  • Regulatory measures should be developed for responsible AI development, but beware of hyper-regulation.
reply
People are naturally risk averse, so it's a lot easier for us to see potential disasters. That's probably a big part of why we don't end up having many of those disasters.
reply
143 sats \ 1 reply \ @quark 17 Jun
We have to remember that LLMs like ChatGPT and so on are not a threat. They will make many jobs obsolete, yes. But it is a good technology when used correctly, like many others. The human extinction problem comes from AGI and ASI, which, if it ever happens, won't be available to just anyone via subscription or open source.
reply
42 sats \ 0 replies \ @OT 17 Jun
This was a fascinating read for me. He is pretty convinced we're going to have superintelligence in just a few years. I only have a superficial understanding of the whole thing, but what he describes with geopolitics and the potential of it escaping human control is pretty scary.
reply
I don't believe AI will ever create a dystopia. I'd sooner believe that apes will be the ones to dethrone humanity on Earth.
reply
I used to be really interested in transhumanism, and one of the things I was looking forward to was augmenting animal intelligence to the point where animals could form complex civilizations of their own.
reply
Not anymore? Isn't it a possibility that apes could form complex civilizations of their own? After all, humans are also apes, technically.
reply
I'm not ruling it out. It's just not one of my main interests anymore.
reply
42 sats \ 6 replies \ @OT 17 Jun
It depends on whether AI is viewed as analogous to the nuclear weapon. In WWII, the powers were racing competitively to create the nuclear bomb. That wasn't cronyism but survival.
I'm interested to hear from people more versed in Austrian economics about weapons technology far superior to what we have now. Should it be secretive or open source?
reply
I'm really not sure how to answer your question. That's not really the way Austrians tend to frame this discussion, but that's largely because most Austrians are also anarchist libertarians who don't think the state should be developing these weapons in the first place.
That's more of a moral position than an economic one, though.
reply
42 sats \ 4 replies \ @OT 17 Jun
Let me try to put it this way. How would individuals protect themselves from an adversary country wanting to conquer or control them? Say the CCP had tech that could monitor and control the world through insect drones armed with bioweapons to kill anyone disobeying their orders.
You would need an equalizer, or something superior to what I described. Would that be possible from private companies, or would you need something bigger (like a state), with all its resources, for protection?
If the US had been a libertarian jurisdiction during WWII, the Nazis would have developed the nuclear bomb first, leaving the US unable to defend itself.
reply
I personally have never seen a compelling case that the state was necessary in any scenario that has actually occurred. That makes me skeptical of these sorts of thought experiments. I think it's most likely that the scenario itself is unrealistic, or that a state wouldn't help anyway.
Even if the Nazis had gotten the bomb, they weren't going to conquer America. After all, America was much wealthier than Germany and was the only one with nukes, and it's not like America easily conquered everything thereafter.
reply
42 sats \ 2 replies \ @OT 18 Jun
Well the US was kind of the first to NOT conquer/colonize. I don’t know if we can count on other states for a similar outcome.
The example does sound like a sci-fi movie. But imagine Aboriginal Australians meeting their European colonizers for the first time, with ships, guns, and all kinds of tools and materials. It would have looked like magic.
reply
I hear you, but it wasn't for lack of a centralized state that the Aborigines were conquered.
I think it’s more likely that something like you’re describing breaks containment and accidentally devastates humanity than that the CCP develops it and wields it in that way.
Of the options you listed, I think the most likely safeguard is the Big Tech companies. That’s where the real expertise is concentrated. I don’t have a particularly strong opinion about this though.
reply
42 sats \ 0 replies \ @OT 18 Jun
I don’t have a strong opinion either since I just started learning about it.
He argues that the problem with startups is that they aren't particularly good at keeping secrets or vetting staff. That is something the state is actually good at.
It's yet to be seen, of course. It's a really interesting article (although long), and he goes into some of the ridiculous amounts of money we hear about, like Altman asking for something like 6 trillion dollars.
reply
How could we forget the time steam engines replaced all muscular labor in existence, leading straight to civilization's doom.
reply