0 sats \ 0 replies \ @DarthCoin 10 Oct 2023
I will kindly and sincerely say: FUCK OFF AI
reply
0 sats \ 11 replies \ @k00b OP 10 Oct 2023
Is anyone a little more familiar with how these modern models work? My undergraduate understanding of neural networks is that costs are all upfront, incurred while training them, not running them.
I initially assumed they were amortizing the upfront training costs across customers, but if the costs are variable, something else is going on. Is it the cost of adapting/biasing the models to the customer's context?
reply
982 sats \ 4 replies \ @SimpleStacker 10 Oct 2023
Training is definitely costlier than inference, but inference is not costless either, especially if you are fielding millions of requests.
Moreover, to maintain a competitive edge, I would assume the models are constantly being fine-tuned, not to mention the fixed costs of keeping highly specialized and in-demand engineers on staff... I can easily see how the costs add up.
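To put rough numbers on that, here's a back-of-envelope sketch in Python. Every figure below is an illustrative assumption (model size, request volume, accelerator throughput), not a vendor's actual number; the point is just that per-token compute scales with parameter count, so serving millions of requests costs real money.

```python
# Rough inference cost estimate; all numbers are illustrative assumptions.

params = 70e9                     # assumed dense model size: 70B parameters
flops_per_token = 2 * params      # rule of thumb: ~2 FLOPs per parameter per token

tokens_per_request = 1_000        # assumed prompt + completion length
requests_per_day = 5_000_000      # "millions of requests"

daily_flops = flops_per_token * tokens_per_request * requests_per_day

gpu_flops = 300e12                # assumed sustained throughput per accelerator (~300 TFLOP/s)
gpu_hours_per_day = daily_flops / gpu_flops / 3600

print(f"{daily_flops:.2e} FLOPs/day -> ~{gpu_hours_per_day:,.0f} accelerator-hours/day")
```

Even with these toy numbers it comes out to hundreds of accelerator-hours per day, before redundancy, peak-load headroom, and the constant fine-tuning runs mentioned above.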
reply
0 sats \ 3 replies \ @k00b OP 10 Oct 2023
Oh for sure. My knowledge is dated, but once upon a time it was thought you could ship trained models to clients and run them there without specialized hardware.
If this is all there is to it, then some customers are performing 4x more inference requests than others, which tracks.
Maybe what I'm not accounting for is the size of these models. If they are enormous, with many, many weights, then scaling inference could be super-linear.
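For a sense of why "ship the trained model to the client" stopped being viable, here's a sketch of the weight footprint alone. Parameter counts are assumptions for illustration, and activations and the KV cache come on top of this:

```python
# Memory needed just to hold the weights, ignoring activations and KV cache.
# Parameter counts below are assumptions for illustration.

def weight_gb(params, bytes_per_param=2):   # fp16/bf16: 2 bytes per weight
    return params * bytes_per_param / 1e9

for name, params in [("7B", 7e9), ("70B", 70e9), ("175B", 175e9)]:
    print(f"{name}: ~{weight_gb(params):.0f} GB of weights")
# 7B:   ~14 GB  -> roughly one consumer GPU
# 70B:  ~140 GB -> several datacenter GPUs
# 175B: ~350 GB -> a multi-GPU server just to load it
```

So the per-request cost is still linear, but the constant per request is huge because every generated token has to touch all of those weights.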
reply
894 sats \ 0 replies \ @SimpleStacker 10 Oct 2023
Based on what I know of these model architectures, compute costs should scale linearly with the number of requests (or more precisely, the number of batches since TPUs will process requests in parallel)
There could be other issues regarding concurrency, latency, congestion, etc. Or maybe there are other physical limitations regarding hardware. But just on the model itself, I don't see why it should be super-linear in the number of requests. If I'm wrong, I'd be happy to know it though.
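A toy sketch of that linearity, with made-up numbers: batching packs requests onto a single accelerator more efficiently, but the total compute is still (per-request cost) times (number of requests).

```python
# Illustrative: total inference compute grows linearly with request count.

FLOPS_PER_REQUEST = 1.4e14   # assumed fixed per-request cost for a fixed model
BATCH_SIZE = 32              # requests an accelerator processes together

def total_flops(n_requests):
    # Linear in the number of requests, with or without batching.
    return FLOPS_PER_REQUEST * n_requests

def batches_needed(n_requests):
    # Batching improves utilisation and latency, not the total work above.
    return -(-n_requests // BATCH_SIZE)   # ceiling division

for n in (1_000, 10_000, 100_000):
    print(f"{n} requests: {total_flops(n):.2e} FLOPs in {batches_needed(n)} batches")
```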
reply
894 sats \ 1 reply \ @0fje0 10 Oct 2023
This is a quote from the blog I was thinking of:
https://www.understandingai.org/p/large-language-models-explained-with
reply
0 sats \ 0 replies \ @k00b OP 10 Oct 2023
Thanks. This still seems to be mostly talking about fixed training costs though.
I can't figure out why it's so expensive to run the models once they're created unless they're massive and irreducible ... which they probably are, but I haven't found a written account of that.
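One concrete reason, sketched here with assumed numbers: dense autoregressive decoding has to stream essentially all of the weights through the accelerator for every generated token, so per-user generation speed is capped by memory bandwidth, and the only ways past that are more hardware, bigger batches, or a smaller/distilled model.

```python
# Why serving stays expensive after training (illustrative, assumed numbers).
# For a dense model, each generated token reads roughly all of the weights.

model_bytes = 140e9      # assumed 70B parameters in fp16 (~140 GB of weights)
hbm_bandwidth = 2e12     # assumed aggregate memory bandwidth (~2 TB/s)
                         # (in practice the weights are sharded across devices)

tokens_per_sec_bound = hbm_bandwidth / model_bytes   # batch-size-1 upper bound
print(f"~{tokens_per_sec_bound:.0f} tokens/s upper bound at batch size 1")
# ~14 tokens/s per 2 TB/s of bandwidth: serving many users means many
# accelerators or large batches, and the weights can't be shrunk for free.
```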
reply
894 sats \ 0 replies \ @0fje0 10 Oct 2023
I'm pretty sure I saw this discussed in a recent blog post.
Unfortunately I didn't save it, but I'll see if I can dig it up again.
reply
0 sats \ 4 replies \ @DarthCoin 10 Oct 2023
Yes. It will end up in communism.
I know you guys don't believe me. But it is the plain truth and you are still in denial.
reply
0 sats \ 3 replies \ @k00b OP 10 Oct 2023
I'm not in denial. Poor AI stewardship will likely lead to huge wealth gaps and when that happens people tend to vote themselves into forms of communism. It's also the ultimate surveillance tool.
Hating AI doesn't stop AI, though. Just like hating CBDCs doesn't stop CBDCs. Bitcoin is the only thing that might stop CBDCs. Similarly, the only thing that will stop AI-induced communism is a technological rival that's open and free.
You don't show up with a knife to a gun fight. You show up with a bulletproof vest and a better gun.
reply
894 sats \ 2 replies \ @DarthCoin 10 Oct 2023
It doesn't matter if it is open or closed source.
It's just about the way people will use it.
https://i.postimg.cc/g2jZJnpK/chagpt.png
I am not against AI/robotics; I am not a caveman who just came out of the cave.
I just want AI/robotics to be used ONLY for tasks that replace hard human labor (mines, digging holes, asteroids, etc.)... and let humans do the creative work and thinking.
I will be happy if an AI/robot builds my citadel. But I want to build it myself, to show all that fucking shit AI that humans can build things too: proof of work.
This wasn't built by a shitGPT... but by my own hands, human hands.
https://i.postimg.cc/pdr6VD3L/citadel-lvl-14.jpg
Nowadays we are seeing that even for a meaningless fried-eggs recipe, people are asking shitGPT (open or closed source, they don't care) how to do it.
This is the world I see coming, not far from now, if we continue to use this shit.
https://i.postimg.cc/SKZRVKY7/people-after-AI-2030.gif
Let's talk again in 5 years... if you are still here and not replaced by some kind of shit AI bot.
reply
21 sats \ 1 reply \ @k00b OP 10 Oct 2023
I see what you mean now. AI could definitely rob us of purpose because that's exactly what it's designed to do. How do we fight that though?
reply
0 sats \ 0 replies \ @DarthCoin 10 Oct 2023
I suggest you watch that old movie, sorry, documentary: "Idiocracy".
The answer is there.
reply