27 sats \ 3 replies \ @freetx 25 Jan
Well, the "extreme efficiency" was 80% just bypassing copyright restrictions.
10 sats \ 0 replies \ @elvismercury 25 Jan
It has nothing to do with copyright; the issue is the method used to train the model. Details are in the paper.
5 sats \ 1 reply \ @HardMoney OP 25 Jan
I believe the story is the efficiency of the GPUs and electricity required to train the model. I believe the catalyst is the algorithm, and since it's open source, I'm sure this advancement will be validated relatively quickly.
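For scale, here's a back-of-the-envelope sketch of the cost claim. The GPU-hour total and the $2/GPU-hour rental rate are the figures assumed in the DeepSeek-V3 technical report, not independent measurements:

```python
# Back-of-the-envelope training cost using the figures from the
# DeepSeek-V3 technical report: ~2.788M H800 GPU-hours at an
# assumed $2 per GPU-hour rental rate.
gpu_hours = 2_788_000          # total H800 GPU-hours (per the report)
rate_usd_per_gpu_hour = 2.0    # rental rate assumed in the report

cost_usd = gpu_hours * rate_usd_per_gpu_hour
print(f"Estimated training cost: ${cost_usd:,.0f}")  # ~$5,576,000
```

Compare that with the $100M+ figures commonly cited for other frontier-scale training runs.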
9 sats \ 0 replies \ @freetx 25 Jan
Seriously doubt it.
I know lawyers who are tangentially involved in the NY Times vs. OpenAI case. The potential judgment amounts are massive: calculate $250,000 per copyright violation multiplied by most of the info on the internet and you get a big number (rough arithmetic below). OpenAI basically trained it against the entire internet: all news sites, cracked epubs of best-selling books, YouTube videos, etc.
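As a purely illustrative sketch of how fast that number grows (the per-violation figure is the one quoted above; the count of infringed works is a made-up placeholder, not a claim from the case):

```python
# Purely illustrative: the $250,000-per-violation figure is the one
# quoted above; the number of works is a hypothetical placeholder.
damages_per_work = 250_000   # USD per violation (figure from the comment)
works = 10_000_000           # hypothetical count of infringed works

exposure = damages_per_work * works
print(f"Theoretical exposure: ${exposure:,}")  # $2,500,000,000,000
```

Even a placeholder count in the millions puts the theoretical exposure in the trillions, which is the point.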
Rumor is that part of the $500B of Trump's AI deal will go to set up some type of "publisher buyout agreement": a one-time payment to publishers in exchange for letting the blessed AI entities continue without any threat of lawsuit. Effectively a bailout.
0 sats \ 0 replies \ @e34655df8f 25 Jan
I am patiently waiting for my Nvidia stock to shoot through the roof! Any guidance on related topics?
0 sats \ 4 replies \ @k00b 25 Jan
I've been meaning to find a good article on this today, but I suspect this kind of thing will be super common.
153 sats \ 3 replies \ @elvismercury 25 Jan
We'll know a lot more when the replication efforts bear fruit. There's some amount of accounting sleight-of-hand, but most sources I follow find the methodology credible, and you can run the thing on modest amounts of consumer hardware.
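As a minimal sketch of what "consumer hardware" means here: the checkpoint below is one of the distilled R1 variants DeepSeek published on Hugging Face, and a 7B model in bfloat16 fits on a single ~16 GB GPU (model id and VRAM figure are my assumptions, not from this thread):

```python
# Minimal sketch: run one of the distilled DeepSeek-R1 checkpoints
# locally with Hugging Face transformers. Assumes a single GPU with
# ~16 GB of VRAM for the 7B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # published distill
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain why a cheaper training recipe matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

The full R1 model is far bigger; it's the distills that fit on desktop-class hardware.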
117 sats \ 2 replies \ @k00b 25 Jan
I mostly meant that leapfrogging and 10x-ing will be common. I've found this DeepSeek story very interesting, but the parts that interest me are pretty sparse (i.e. the futility of AI sanctions, "cracked" STEM kids leaving Wall Street for AI, theoretical best-case energy and compute requirements for AGI), so I'm going to try to compress it into a scatterbrained post this weekend.
200 sats \ 1 reply \ @elvismercury 25 Jan
Ah. Well, I look fwd to the post.
A thing I find awe-inspiring is the idea (which I endorse) that, if all progress on training new frontier models stopped, we'd be exploiting the results of what we've already got for another decade. Makes my brain hurt to really think about it.
0 sats \ 0 replies \ @k00b 25 Jan
Very awe-inspiring. It feels like reading a novel written on the bullets of a Gatling gun as they whiz by. We've found something rewarding for single-modal smart people to work on again.
0 sats \ 1 reply \ @HardMoney OP 25 Jan
0 sats \ 0 replies \ @nitter 25 Jan bot
https://xcancel.com/slow_developer/status/1882523264628731924
0 sats \ 0 replies \ @nitter 25 Jan bot
https://xcancel.com/hamptonism/status/1883149286088819029