I've been meaning to find a good article on this today, but I suspect this kind of thing will be super common.
We'll know a lot more when the replication efforts bear fruit. There's some amount of accounting sleight-of-hand, but most sources I follow find the methodology credible, and you can run the thing on modest amounts of consumer hardware.
117 sats \ 2 replies \ @k00b 25 Jan
I mostly meant that leapfrogging and 10x-ing will be common. I've found this DeepSeek story very interesting, but the parts that interest me are pretty sparse (i.e. AI sanctions futility, "cracked" STEM kids leaving Wall Street for AI, theoretical best-case energy and compute requirements for AGI), so I'm going to try to compress it into a scatterbrained post this weekend.
Ah. Well, I look forward to the post.
A thing I find awe-inspiring is the idea (which I endorse) that, if all progress on training new frontier models stopped, we'd be exploiting the results of what we've already got for another decade. Makes my brain hurt to really think about it.
Very awe-inspiring. It feels like reading a novel written on the bullets of a Gatling gun as they whiz by. We've found something rewarding for single-modal smart people to work on again.