Some game theory played out as to where we may be headed in a global arms race for AGI.
128 sats \ 2 replies \ @freetx 18h
I don't think the risk of achieving AGI is that great - I think I agree with Roger Penrose's argument that consciousness is non-computable.
There is however a real risk that dumb humans will start worshiping their simulacrum.
I'm reminded of the section of Isaiah where he mocks dumb humans worshiping their blocks of wood...it's actually a pretty funny section of the Bible, with a real sarcastic tone.
He talks about how some guy walks outside and cuts down a tree, uses most of it to build a fire to cook his dinner...then looks at the remaining wood:
36 sats \ 1 reply \ @optimism 17h
Agreed. Also, I think the doom scenario assumes a world where governments are in total control unless they explicitly delegate to corporations or AI, which, at least in my personal experience of the world, isn't necessarily true.
Example: if governments had totalitarian control, there wouldn't be libgen; llama would not have been able to train on libgen, nor would deepseek or qwen; there would be no great open models, and no explosion of people training their own on hugging face. Most of modern, accessible, affordable AI exists because of government non-regulation (hugging face isn't regulated right now) or the government's inability to enforce regulation (in the case of libgen). So the most recurring premise in the doom scenario - government choice - is probably a bad assertion.
In the last 4 months I have automated things I couldn't do before, thanks to these open models. I don't think there's a limit to what is achievable in terms of automation (though not awareness - it's all instruct and semi-patterned execution at the moment), and I think it's not a 10x but a 100x in performance if I focused on it and approached it fully. Now, I don't want to harm people, but others like me may be less tolerant, or even emboldened by their belief that a forward autocorrect prediction algo is sentient.
Aren't we already seeing that happen, though infrequently, even while most people are only exposed to LLM / multi-modal generators and nothing more advanced?
210 sats \ 0 replies \ @freetx 16h
Agreed. As I said elsewhere, AI is sort of a 2nd Amendment issue.
That is, the public's "private" AI will counter-balance the centralized AI from Gov / BigCorp.
Even though our individual AI agents may be 1000x less powerful than theirs, our numbers are greater. No different from how a group of Vietnamese peasants armed only with AK-47s and sandals can win a war against fighter jets, bombers, tanks, etc.