Well, let's say a big cloud provider is less likely to leak customers' chat logs than a random node on the internet.
I think that to make this a bit more private, you could run as many layers as possible on the user's machine (or your centralized gateway) and then offload the rest to a group of nodes, so that no single node runs the inference from start to finish.
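The split described above can be sketched in a few lines. This is a toy illustration, not a real system: the "layers" are simple affine maps, the node boundaries are simulated in-process, and all names (`LOCAL_SPLIT`, `run_local`, `run_remote`, `infer`) are hypothetical.

```python
# Toy sketch of split inference: the first layers run on the user's machine,
# so the raw input never leaves it; only an intermediate activation is sent
# out, and the remaining layers are divided between two simulated nodes so
# neither one sees the pipeline end to end.

def make_layer(weight, bias):
    """A toy 'layer': an affine map on a single float activation."""
    return lambda x: weight * x + bias

# A toy 6-layer "model" (weights are arbitrary).
layers = [make_layer(w, b) for w, b in
          [(1.1, 0.2), (0.9, -0.1), (1.05, 0.0),
           (0.8, 0.3), (1.2, -0.2), (1.0, 0.1)]]

LOCAL_SPLIT = 3  # run the first 3 layers locally

def run_local(x):
    for layer in layers[:LOCAL_SPLIT]:
        x = layer(x)
    return x  # only this intermediate activation is offloaded

def run_remote(x, node_layers):
    # In a real deployment each chunk would run on a different node,
    # connected over the network; here it's simulated in-process.
    for layer in node_layers:
        x = layer(x)
    return x

def infer(x):
    hidden = run_local(x)
    # split the remaining layers across two hypothetical nodes
    mid = LOCAL_SPLIT + (len(layers) - LOCAL_SPLIT) // 2
    hidden = run_remote(hidden, layers[LOCAL_SPLIT:mid])  # "node A"
    hidden = run_remote(hidden, layers[mid:])             # "node B"
    return hidden
```

The split pipeline produces the same result as running all layers in one place; the privacy gain is only about who gets to observe which activations, not about changing the computation.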
I'm new on stacker news
Welcome! By the way, I only noticed your reply because you sent the zap.
Since your comment was a freebie, it was hidden automatically. You should connect a wallet or buy some credits on SN to make sure you reach the people you reply to.
Great questions,
Nodes just process random inference requests from random sources, so the queries are potentially visible to the node runner, just as they already are to the operator when you use any cloud inference on the main platforms today. The advantage here is that there is no direct link between the identity of the inference buyer and the node runner, unlike on a centralized platform.
I'll do some research to see if there is a way to embed encryption at some layer of the inference stack ...
Both your questions are very good and thoughtful, thanks for contributing! I'm new on Stacker News; let me see if I can find a way to send some sats your way : )