
Great questions,

  1. The system is designed to distribute inference requests across the network, so there's no such thing as an "untrusted node" in this context: there is no system of trust in place, so all nodes should be considered "untrusted" by definition. Use them for general inference, but never to handle API secrets and the like, which is good practice with any other "cloud provider" anyway.

Nodes just process random inference requests from random sources, so the queries are potentially visible to the node runner, just as they already are with any cloud inference on the main platforms today. The advantage here is that there is no direct connection between the identity of the inference buyer and the node runner, as there would be on a centralized platform.

  2. There is already a Karma system for nodes contributing to the free tier; this could be a great feature request addition, thanks!

I'll do some research to see if there is a way to embed encryption at some level of the inference stack ...

Both your questions are very good and thoughtful, thanks for contributing! I'm new on Stacker News, let me see if I can find a way to send some sats your way : )

20 sats \ 0 replies \ @rblb 2 Apr

Well, let's say it's less likely for a big cloud provider to leak customers' chat logs than a random node on the internet.

I think for this to be somewhat more private, you could run as many layers as possible on the user machine (or your centralized gateway) and offload the rest to a group of nodes, so that no single node runs the inference from start to finish.
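The split idea can be sketched in a few lines. This is a hypothetical toy (the layers, weights, and `split_inference` helper are all made up for illustration): the user machine runs the first layers and only ships the intermediate activation onward, so the remote side never sees the raw input.

```python
# Toy sketch (hypothetical) of split inference: the trusted side runs the
# first layers, an untrusted node only receives intermediate activations.

def make_layer(weight, bias):
    """A toy 'layer': scalar affine transform followed by ReLU."""
    def layer(x):
        return max(0.0, weight * x + bias)
    return layer

# A toy 5-layer model; in practice these would be transformer blocks.
model = [make_layer(w, b)
         for w, b in [(1.2, 0.1), (0.8, -0.2), (1.5, 0.0), (0.9, 0.3), (1.1, -0.1)]]

def run_layers(layers, x):
    for layer in layers:
        x = layer(x)
    return x

def split_inference(x, split_at=2):
    # Local: the user machine (or gateway) runs the first `split_at` layers,
    # so the raw input never leaves the trusted side.
    activation = run_layers(model[:split_at], x)
    # Remote: the node only ever sees `activation`, not the original input.
    return run_layers(model[split_at:], activation)

# Splitting must give the same result as running end-to-end locally.
assert split_inference(1.0) == run_layers(model, 1.0)
```

Worth noting this only hides the raw input, not everything: intermediate activations can still leak information about the prompt, which is why splitting across a *group* of nodes (each seeing only a slice) helps.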

I'm new on stacker news

Welcome! By the way, I noticed your reply only because you sent the zap.
Since your comment was a freebie it was hidden automatically; you should connect a wallet or buy some credits on SN to make sure you reach the people you reply to.
