894 sats \ 0 replies \ @SimpleStacker 10 Oct 2023 \ parent \ on: Big Tech Struggles to Turn AI Hype Into Profits tech
Based on what I know of these model architectures, compute costs should scale linearly with the number of requests (or, more precisely, with the number of batches, since TPUs process requests in parallel).
There could be other issues around concurrency, latency, congestion, etc., or maybe there are physical limitations on the hardware side. But based on the model alone, I don't see why cost should be super-linear in the number of requests. If I'm wrong I'd be happy to know it though.
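To make the linear-scaling intuition concrete, here's a toy cost model (all names and numbers are illustrative assumptions, not figures from the article): cost is driven by the number of batches, and batches grow linearly with requests.

```python
import math

def compute_cost(num_requests: int, batch_size: int, cost_per_batch: float) -> float:
    """Toy model: the accelerator processes one batch at a time, requests
    within a batch in parallel, so cost scales with batch count."""
    num_batches = math.ceil(num_requests / batch_size)
    return num_batches * cost_per_batch

# Doubling the requests roughly doubles the cost once batches are full:
# compute_cost(1000, 32, 1.0) -> 32.0
# compute_cost(2000, 32, 1.0) -> 63.0
```

In this sketch, per-request cost is essentially flat; any super-linearity would have to come from elsewhere (queueing, congestion, hardware limits), not from the model's forward pass itself.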