
If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of a transformative breakthrough. Maybe you are skeptical. This blog post is for those who want to think more carefully about these claims and examine them from a perspective that is often missing in the current discourse: the physical reality of computation.
I have been thinking about this topic for a while now, and what prompted me to finally write this down was a combination of things: a Twitter thread, conversations with friends, and a growing awareness that the thinking around AGI and superintelligence is not just optimistic but fundamentally flawed. The purpose of this blog post is to address what I see as very sloppy thinking, the kind created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness. This amplification of bad ideas, exuded by the rationalist and EA movements, is a big problem for shaping a beneficial future for everyone. Realistic thought can ground where we are and where we have to go to shape a future that is good for everyone.
I want to talk about hardware improvements, AGI, superintelligence, scaling laws, the AI bubble, and related topics. But before we dive into these specific areas, I need to establish a foundation that is often overlooked in these discussions. Let me start with the most fundamental principle.

Contents

  • Computation is Physical
  • Linear Progress Needs Exponential Resources
  • GPUs No Longer Improve
  • Why Scaling Is Not Enough
  • Frontier AI Versus Economic Diffusion
  • AGI Will Never Happen, and Superintelligence Is a Fantasy
1162 sats \ 4 replies \ @itsrealfake 11h
this dude has an impressive record of awards.
I don't know that AGI will happen, but I can tell you that it's not going to be limited by the reasons this guy points to.
So, first, GM... the sun hasn't even risen and I'm already off on this friggin tangent...

First, he says computation's physical. Yes, dummy, we know:

Two ideas to remember: First, larger caches are slower. Second, as we get smaller and smaller transistors, computation gets cheaper, but memory becomes more expensive, relatively speaking. [...] Almost all area is allocated to memory. In other words, if you want to produce 10 exaflops on a chip, you can do that easily — but you will not be able to service it with memory, making it useless [...] All of this makes AI architectures like the transformer fundamentally physical.
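To be fair, the napkin math on that memory point does hold on silicon as we know it. Here's a minimal sketch, assuming a hypothetical 10 exaFLOP/s chip and some round-number arithmetic intensities (my numbers, not his):

```python
# Napkin roofline math: a chip's FLOPs are useless if memory bandwidth
# can't feed them. All numbers are illustrative assumptions, not figures
# from the article.

peak_flops = 10e18  # the quote's hypothetical 10 exaFLOP/s chip

# To stay compute-bound: peak_flops = bandwidth * intensity,
# so the required bandwidth is peak_flops / intensity (bytes/s).
for intensity in (100, 1_000, 10_000):  # FLOPs per byte moved
    required_tb_per_s = peak_flops / intensity / 1e12
    print(f"at {intensity:>6} FLOP/B: {required_tb_per_s:>10,.0f} TB/s of memory bandwidth")

# For scale: a single H100's HBM delivers on the order of ~3 TB/s, so even
# the friendliest case above wants hundreds of H100s' worth of bandwidth.
```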
But your argument misses Photonics & Holographic Storage, which aren't ready yet, but will be before the machines eat whatever it is that smartypants like you eat for lunch, nerd.

Second, a red herring & the energetic limits of thinking

Effective altruists were always a bunch of navel-gazing, rationalizing pricks... so, toss them in the mix because you don't like them if you want, but doubt the fundamental universal push to wake matter up at your peril.
then he says:
With bigger brains, we would not be able to have children — [..] because we would not be able to provide enough energy — making our current intelligence a physical boundary [..] due to energy limitations.
tell me you've never had a panic attack. tell me you've never fasted for days on end, and become stupid, calories running dry. tell me you never ran too many programs and swapped RAM to disk. yeah, ideal performance requires ideal circumstances. but complex systems don't just fall over when they're swamped. instead, they/we (somewhat) gracefully degrade.
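Software pulls the same trick. A toy sketch of the spill-to-disk pattern; the class name, threshold, and file path are all invented for illustration:

```python
# Graceful degradation in miniature: when the fast resource (RAM) is
# exhausted, spill to a slower one (disk) instead of falling over.

import shelve

class DegradingCache:
    """Hot items stay in memory; overflow spills to disk, slower but alive."""

    def __init__(self, max_in_memory=1000, spill_path="cache_spill"):
        self.hot = {}                         # fast path: plain dict in RAM
        self.max_in_memory = max_in_memory
        self.spill = shelve.open(spill_path)  # slow path: on-disk store

    def put(self, key: str, value) -> None:
        if len(self.hot) < self.max_in_memory:
            self.hot[key] = value             # ideal circumstances: stay fast
        else:
            self.spill[key] = value           # swamped: degrade, don't die

    def get(self, key: str):
        if key in self.hot:
            return self.hot[key]              # RAM hit: full speed
        return self.spill.get(key)            # disk hit: stupider, not dead
```

Slower when swamped, still standing. That's the whole point.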
The reality might be that certain aspects of physics are unknowable, hidden by complexity that cannot be attained with the resources that we can muster.
Yes, dude... hard problems are harder than individual humans can comprehend. That's why machines working together are going to be critical to the process. That's why we (bio-machines, albeit special ones that "feel" unique in our ability to perform the calculation of experiencing emotions and things) needed language to get from fire and the wheel to graphite pencils and land-speed records.
There's a book from 2001 about how these technologies will progress: The Quantum Brain: The Search for Freedom and the Next Generation of Man. The book is pop-sci, but the framework for progress that it presents is important to thinking clearly about the complexity of the progress that's unfolding around us: GRIN.
Genetics, Robotics, Intelligence (of the Artificial kind), Nano-technology (meta-materials, specifically the kind necessary for opto-electronics)
Those technologies unlock each other, and they support the creation and (very importantly) management of the machines that are just now becoming possible to build. The usual example is that humans require support from machines to make the circuits in complicated CPUs and GPUs; these complex machines further empower groups of humans and the networks they create (see: corporate entities), giving them better tools to build better machines.
The biggest problem is this: if scaling does not provide much larger improvements than research/software innovations, then hardware becomes a liability and not an asset.
He gets this right. But he's essentially describing the development of vestigial appendages in biological systems. Just because they're built doesn't mean they have to be vital to the next steps in evolution.
So, he also mentions that Gemini might be a plateau... for LLMs. But then he ignores, or just doesn't know about, other mechanisms to get performance out of parameters (see: HRM, which performs zillions of times better on complex shit (sorry, I'm just pointing at 9x9 Sudoku) with a thousandth of the parameters).
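The napkin math behind that parameter aside, assuming the rough reported figure of ~27M parameters for HRM against a ~27B-parameter LLM (both counts are my assumptions; swap in your own):

```python
# Back-of-envelope comparison for the "thousandth the parameters" claim.
hrm_params = 27e6  # assumed HRM parameter count (~27M, its reported rough size)
llm_params = 27e9  # assumed mid-size modern LLM for comparison (~27B)

print(f"HRM uses roughly 1/{llm_params / hrm_params:,.0f} of the parameters")  # -> 1/1,000
```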

Don't forget to ring the bell if you want to get your Chinese money, Dr. Dettmers

I think it is easy to see that the US philosophy is short-sighted and very problematic — particularly if model capability slows. The Chinese philosophy is more long-term focused and pragmatic.
calling @Solomonsatoshi (and I swear to god, bro... if you tell me I don't know about shit again I'm going to keep ignoring everything you say because you're obviously incapable of examining the internals of my thought process and I do know about the things you're describing as fundamental flaws in the "western" model that you have no faith in, I merely believe that decentralizing systems will out-compete hierarchy... even if it means a bifurcation of the species, which I'm not particularly fond of as an outcome... unless I'm on the right side of that bifurcation, which I doubt either of us will be.)
The key value of AI is that it is useful and increases productivity. That makes it beneficial. It is clear that, similarly to computers or the internet, AI will be used everywhere. The problem is that if AI were just used for coding and engineering, it would have a very limited impact. China is [...] subsidizing applications that use AI to encourage adoption [...] which facilitates this process. [...] The US [...] bets on ideas like AGI and superintelligence.
I edited a lot of superfluous words out of this academic slop to get to the point that: THESE SYSTEMS DON'T EXIST IN BUBBLES, AND LOWER-COST PRODUCTS IN A MONEY-DRIVEN SYSTEM WILL DEMOCRATIZE ACCESS, AND WE WILL SEE A CAMBRIAN EXPLOSION OF TOOLS ONCE THE TOOLS LEARN HOW TO BE REPLICATED.
Whether the governments (early technemes) do or don't create systems that cultivate more technemes is immaterial, because the technemes are already replicating, and the evolution isn't going to stop (short of some sort of obliteration event).
(If you're interested in how I see this Cambrian explosion coming to pass, you could watch Susan Blackmore's TED video a few dozen times over the course of the last 20+ years, and have a working mental model of what she's calling "technemes".)

Finally, I'll get to the thing I could have started with, but stay with me.

If the dude wanted to make a claim about AGI, he picked the weakest version of that definition:
True AGI, that can do all things human, would need to be able to [perform] physical tasks
It's not Artificial General "HUMANS", it's artificial general intelligence, and the definition doesn't need to include walking, talking, fucking, and dying in the "real" material world that humans are limited to. It could just as well include movement in virtual environments.
A better definition to take on, if he wants to make a convincing argument, would be this:
The best definition of AGI is the one proposed by (I think, but maybe it was somebody before him) Ray Kurzweil (like him or love him, he's on the friggin' money, and he thinks a damn lot harder than Dr. Timtim about the future: he's f* rich as f*, well-connected, and thus isn't stuck in an ivory tower).
read it again: "technology capable of matching what an expert in every field can do, all at the same time."
That's AGI. That's what this guy is suggesting can't happen. And his framework for understanding the platform hosting this technology is ... NVIDIA GPUs. Are you fucking kidding me? Get bent.
We're not stopping. We (the royal we, who watch this shit chug along the natural evolutionary path of technological mimemes (groups of brains with 3-ring binders) replicated inside memetic superstructures (brains assembled as groups, performing behaviors) that our genetic superstructures (particles assembled as brains) create) are going to get opto-electronic processors & holographic data storage. We'll see the flower unfold... the singularity is near, y'all.
Anyway, I guess this guy can't see it because he's apparently un-liberated in his academic thinking. You need to spend years being a shiftless layabout crypto-bro with limited job prospects, incapable of fitting in, reading hella non-fiction about Futurism and paying limited attention to the so-called limits of silicon processors... like me... to know what's really happening.
Trust me, bro.

In closing... an argument that I would accept

If he had instead said: AGI will never happen because humans are going to get better as we integrate with machines, and therefore the goalpost will keep moving away from us as fast as we get better.
Human experts are only going to be able to earn that moniker if they're utilizing the best ideas and technologies to sustain their position at the top of their conceptual hill.
Or, maybe, there's something about cognition-networks that comes out of it... tech-enabled humans interacting with other tech-enabled humans... or even super-human parasites pseudo-symbiotically leveraging lots of feeble-minded humans who just stare at a screen and follow subliminal instructions en masse. I could see the definition of "expert" being adjusted to accommodate that reality.
Interaction of nodes in the system will continue. Black holes of entropy, balanced by explosions of superdense intelligence, propagating into space at as close to the speed of light as physics will allow... or skipping the whole space-traversal step in favor of bending dimensions with ultra-high-voltage so that light and the starship aren't hindered in their velocity by the distance divisor.
reply
cc @plebpoet ... I don't always dig out my sci-fi roots, but when I do, it's this kind of early-morning slop.
there's some good links in here. if you know writers who would benefit, put 'em on the scent!
reply
21 sats \ 0 replies \ @plebpoet 9h
Whoaaa I thank you and will come back to this when it’s not so early :p
reply
god is dead, apparently. i suspect however that intelligence is impossible outside of a biological organism, as intelligence is oriented fundamentally upon the objective of survival and the perpetuation of one's dna. machines lack this context. if this is true, then AI can be useful only in combination with robotics and will remain trapped in a mechanistic role, and China's view on AI seems more closely aligned to this paradigm. if machines can achieve some form of intelligence, then there would logically tend to be spiritual implications along the lines of ahriman, where the humans creating and employing such an entity would become affected and afflicted by it.
reply
i suspect however that intelligence is impossible outside of a biological organism, as intelligence is oriented fundamentally upon the objective of survival and the perpetuation of one's dna.
watch the linked Susan Blackmore talk, and consider the case that DNA is the mechanism for building the replicator for memes.
I think there's a worthwhile discussion to be had about whether intelligence must be constrained to a biological (DNA-based) system. We could, for example, imagine a "biological" system that's not DNA-based: some other, off-world biology that uses a different mechanism for transmitting genes, something not based on the double helix. Consider RNA, a single helix; there could also be a triple helix.
How about a situation in which humans added a third strand to the double helix and created an inheritable trait on that third strand: is that "DNA-based"? What if, instead of storing information holographically in DNA, a human used some crystalline structure to store the instructions that the machines comprising the human body use for building their little machine parts?
Here's a great discussion with a person researching how DNA is used to construct the little machine parts that make organisms:
reply
95 sats \ 4 replies \ @optimism 15h
I'd posit that things like #1325928 weaken "AGI Will Never Happen", or at least the "Never" or the "Will". All this assumes that there is only one path to AGI. I think that when the transformer hype dims, there will be some potential.
However, when the transformer hype dims, AI fatigue will be real.
reply
And the investment will leave with the fatigue
reply
78 sats \ 2 replies \ @optimism 14h
Investment never leaves fully, but the fool's money looking for a monthly compounding 10x will, yes.
reply
100 sats \ 1 reply \ @BlokchainB 13h
But with fewer dollars, progress slows tremendously!
reply
Fuck dollars! haha.
You're right, but I'd like to note that progress has stagnated now as well. At least that's what I observe in the field and what the article argues too. So I think it is reasonable to assert that if you have a real project, not some money-sucking, self-enriching transformer scheme, now would be the time to get some funding in.
reply
0 sats \ 0 replies \ @xz 11h
Was the main constraint identified in the current artificial-intelligence innovation paradigm the memory required to run the calculations, and is this a constraint due to the economic cost of scaling the physical compute requirements (akin to the biological nutritional requirements of a theoretical hive-mind 'super-intelligence' of larger networked brains)?
Just wanted to check my comprehension of the main criticism.
reply