Well, the "extreme efficiency" was 80% just bypassing copyright restrictions.
reply
It has nothing to do with copyright; the issue is the method used to train the model. Details are in the paper.
reply
I believe the story is the efficiency of the GPUs and electricity required to train the model. The catalyst is the algorithm, and since it's open source, I'm sure this advancement will be validated relatively quickly.
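For scale, a rough back-of-envelope using the headline figures from the DeepSeek-V3 technical report (~2.79M H800 GPU-hours at an assumed $2/GPU-hour); the power draw and electricity price below are my own guesses, so treat it as a sketch:

```python
# Back-of-envelope training cost, using the headline figures from the
# DeepSeek-V3 technical report; power draw and electricity price are
# my own assumptions, not from the paper.
gpu_hours = 2_788_000         # reported H800 GPU-hours for the full training run
rental_rate = 2.00            # USD per GPU-hour, the rate assumed in the report
avg_draw_kw = 0.4             # assumed average draw per H800 in kW (hypothetical)
electricity_price = 0.10      # assumed USD per kWh (hypothetical)

rental_cost = gpu_hours * rental_rate
energy_kwh = gpu_hours * avg_draw_kw
electricity_cost = energy_kwh * electricity_price

print(f"GPU rental:  ${rental_cost:,.0f}")       # ~$5.6M
print(f"Energy used: {energy_kwh:,.0f} kWh")     # ~1.1M kWh
print(f"Electricity: ${electricity_cost:,.0f}")  # ~$112k
```

Even if my electricity assumptions are off by a few multiples, the rental cost dominates, which is why the GPU-hours number is the one everyone is arguing about.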
reply
the story is the efficiency of the GPUs and electricity required to train the model.
Seriously doubt it.
I know lawyers who are tangentially involved in the NY Times vs. OpenAI case. The potential judgment amounts are massive: multiply $250,000 per copyright violation by most of the content on the internet and you get a very big number. OpenAI basically trained against the entire internet: all the news sites, cracked EPUBs of best-selling books, YouTube videos, etc.
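Just to make "big number" concrete, here's the arithmetic with a made-up count of infringed works (the per-work figure is the claim above; the count is purely illustrative):

```python
# Hypothetical statutory-damages arithmetic from the claim above.
# The $250k per-work figure is the commenter's number, and the work
# count is made up purely for illustration.
per_work_damages = 250_000       # USD per violation, as claimed above
hypothetical_works = 10_000_000  # made-up count of infringed works

exposure = per_work_damages * hypothetical_works
print(f"Theoretical exposure: ${exposure:,.0f}")  # $2.5 trillion at these inputs
```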
Rumor is that part of the $500B from Trump's AI deal will go to set up some type of "publisher buyout agreement": a one-time payment to publishers in exchange for the blessed AI entities getting to continue without any threat of lawsuits. Effectively a bailout.
reply
I am patiently waiting for my Nvidia stock to shoot through the roof! Any guidance on related topics?
reply
I've been meaning to find a good article on this today, but I suspect this kind of thing will be super common.
reply
We'll know a lot more when the replication efforts bear fruit. There's some amount of accounting sleight-of-hand, but most sources I follow find the methodology credible, and you can run the thing on modest amounts of consumer hardware.
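For anyone curious what "modest consumer hardware" means here, a minimal sketch loading one of the distilled R1 checkpoints via Hugging Face transformers (the full 671B model is far beyond a desktop; the model ID and generation settings are my assumptions):

```python
# A minimal sketch of running one of the distilled R1 checkpoints on
# consumer hardware; the model ID assumes the Hugging Face release,
# and the generation settings are arbitrary.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs fp32; fits a 24 GB GPU
    device_map="auto",           # spill to CPU if VRAM runs out
)

inputs = tokenizer("Why did training get so much cheaper?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```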
reply
I mostly meant that leapfrogging and 10x-ing will be common. I've found this DeepSeek story very interesting, but the parts that interest me are pretty sparse (i.e. the futility of AI sanctions, "cracked" STEM kids leaving Wall Street for AI, theoretical best-case energy and compute requirements for AGI), so I'm going to try to compress it into a scatterbrained post this weekend.
reply
Ah. Well, I look fwd to the post.
A thing I find awe-inspiring is the idea (which I endorse) that, if all progress on training new frontier models stopped, we'd be exploiting the results of what we've already got for another decade. Makes my brain hurt to really think about it.
reply
Very awe-inspiring. It feels like reading a novel written on the bullets of a Gatling gun as they whiz by. We've found something rewarding for single-modal smart people to work on again.
reply