This gets me thinking: could an NPU be expanded to support SHA256 hashing as well?
A couple of years ago I spoke to a former Broadcom designer who, for the lulz, developed a PCIe board in two versions: one for SHA256d and one with a bespoke tensor unit for inference. His thinking was that he could put sets of both in a container and, during NPU demand downtime, mine Bitcoin as grid booking permitted.
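For context, SHA256d (the variant that board targeted) is just SHA-256 applied twice, which is what Bitcoin's proof-of-work does to the 80-byte block header. A minimal sketch in Python; the header bytes and target below are made up for illustration, not a real block:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin block hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    # Bitcoin interprets the 32-byte digest as a little-endian integer;
    # a hash "wins" when it is at or below the current target.
    return int.from_bytes(sha256d(header), "little") <= target

# Toy nonce scan over a fake 80-byte header (placeholder fields).
base = b"\x00" * 76            # version/prev-hash/merkle-root/time/bits stand-in
easy_target = 2 ** 252         # absurdly easy target so the loop finishes fast
winner = None
for nonce in range(1_000_000):
    header = base + nonce.to_bytes(4, "little")
    if meets_target(header, easy_target):
        winner = nonce
        break
```

A real ASIC does exactly this loop in hardware, billions of times per second, which is why the question is whether an NPU die could spare area for the same fixed-function pipeline.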
I don't know how much inference demand downtime there is right now, but judging by the "batch" APIs that have been promoted for a while, demand must be non-constant.
I guess in an ideal world it would work like multitasking: moment-to-moment context switching, maybe not at the microsecond level, but perhaps at the level of seconds. Run an inference request, then guess a few hundred million hashes, then switch back to inference. The goal would be to keep the NPU constantly occupied.
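That alternation could be sketched as a toy scheduler: time-sliced at the seconds level, draining inference requests when they arrive and falling back to hash guessing otherwise. The work functions here are stand-ins, not real NPU or ASIC calls:

```python
import time
from collections import deque

TIME_SLICE = 1.0  # seconds per slice, per the "seconds level" idea above

def run_inference(request):        # stand-in for a real NPU kernel launch
    return f"result for {request}"

def guess_hashes(deadline):        # stand-in for a batch of SHA256d guesses
    guesses = 0
    while time.monotonic() < deadline:
        guesses += 1               # pretend each iteration is a hash batch
    return guesses

def schedule(requests: deque, slices: int, slice_s: float = TIME_SLICE) -> list:
    """Alternate between inference and mining so the chip never idles."""
    log = []
    for _ in range(slices):
        deadline = time.monotonic() + slice_s
        if requests:               # inference has priority when demand exists
            while requests and time.monotonic() < deadline:
                log.append(("inference", run_inference(requests.popleft())))
        else:                      # otherwise fill the whole slice with hashing
            log.append(("mining", guess_hashes(deadline)))
    return log
```

In practice the hard part wouldn't be the loop; it would be keeping weights resident and grid power booked so the switch itself is cheap.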
Not an expert in either, but I've been reading about NPUs lately (a more energy-efficient way to do AI inference).
What struck me was that NPUs are to GPUs as mining ASICs were to GPUs. That is to say, in AI inference the GPU is the "general purpose" chip, while an NPU is specialized just for inference.
That means you really can't use NPUs for anything else, and you also can't use them for things like training. But the vast majority of the world's compute need for AI is inference (powering your ChatGPT chat session).
Which gets me thinking: what if the same "miner" could both mine Bitcoin and do AI inference?