
http://owlrun.me is a background app that turns your idle GPU (or Mac) into a paid AI inference node. You keep 91% of revenue, paid in Sats via Lightning.

What's happening right now:

  • 11 nodes connected and earning real Sats
  • Real inference traffic hitting nodes via a gateway faucet
  • Mainnet Lightning payouts flowing
  • LN Sats and Cashu ecash withdrawals verified with Phoenix and Minibits

How you start earning:
Install in one command → detects idle state → loads an AI model matched to your hardware → serves inference requests → you earn sats per job. You keep up to 96% with volume.
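The lifecycle above (detect idle, serve jobs, accrue sats) can be sketched as a minimal loop. Every function name here is a hypothetical stand-in, not the real client's API; the actual implementation is in the MIT-licensed repo:

```python
import time

def run_node(poll_job, run_inference, credit_sats, is_idle, poll_s=1.0):
    """Toy node loop: while the machine is idle, serve jobs and accrue sats.

    All callbacks are illustrative stand-ins for the real client's internals.
    Returns total sats earned once the machine stops being idle.
    """
    earned = 0
    while is_idle():
        job = poll_job()
        if job is None:
            time.sleep(poll_s)  # no work queued; wait before polling again
            continue
        result = run_inference(job)
        earned += credit_sats(result)
    return earned

# Simulated run: two jobs queued, then the machine goes busy.
jobs = [{"prompt": "hi"}, {"prompt": "bye"}]
state = {"idle_polls": 3}

def is_idle():
    state["idle_polls"] -= 1
    return state["idle_polls"] >= 0

total = run_node(
    poll_job=lambda: jobs.pop(0) if jobs else None,
    run_inference=lambda job: {"tokens": len(job["prompt"])},
    credit_sats=lambda res: res["tokens"],  # toy rate: 1 sat per token
    is_idle=is_idle,
    poll_s=0.01,
)
print(total)  # 5 sats in this toy run (2 + 3 tokens)
```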

How you get paid:
Set a Lightning address once — auto-payouts when your balance hits your threshold.
Or withdraw as Cashu ecash for zero-fee self-custody.
No tokens, no stablecoins, no KYC.

Your code to audit:
Client is Open Source MIT License — https://github.com/fabgoodvibes/owlrun

Your install:
https://owlrun.me/#download

Your first API call (free, no signup):
curl https://api.owlrun.me/v1/chat/completions -H "Content-Type: application/json" -d '{"model":"qwen2.5:0.5b","messages":[{"role":"user","content":"Tell me about you"}]}'
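The same call can be sketched in Python with only the standard library. The response field names (`choices[0].message.content`) assume a standard OpenAI-compatible response shape, which is not confirmed by the post above:

```python
import json
import urllib.request

API_URL = "https://api.owlrun.me/v1/chat/completions"

def build_payload(prompt: str, model: str = "qwen2.5:0.5b") -> dict:
    # Same JSON body as the curl example above.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """POST the prompt and return the assistant's reply text.

    Assumes an OpenAI-style response shape (choices[0].message.content).
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Tell me about you"))
```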

The network is in beta and bootstrapping right now. Real money scales with real demand.
Right now the pipeline is proven by the faucet — every sat is real, on mainnet.

Your feedback is what makes this better.
Telegram: https://t.me/+QeYqeJFwcSY5ODY0

149 sats \ 2 replies \ @rblb 1 Apr

How do you prevent untrusted nodes from decoding and logging the prompt sent by the users?
How do you prevent untrusted nodes from sending garbage outputs just to receive the payout?

reply

Great questions,

  1. The system distributes inference requests across the network, and there is no trust system in place, so all nodes should be considered "untrusted" by definition. Use them for general inference, but never to handle API secrets and the like, which is good practice with any other "cloud provider" anyway.

Nodes just process inference requests from random sources, so queries are potentially visible to the node runner, exactly as they already are on all the main cloud inference platforms today. The advantage here is that there is no direct link between the identity of the inference buyer and the node runner, as there would be on a centralized platform.

  2. There is already a karma system for nodes contributing to the free tier; extending it to penalize garbage outputs would be a great feature addition, thanks!

I'll do some research to see whether there is a way to embed encryption at some level of the inference stack ...

Both your questions are very good and thoughtful, thanks for contributing! I'm new on Stacker News; let me see if I can find a way to send some sats your way : )

reply
20 sats \ 0 replies \ @rblb 2 Apr

Well, let's say it is less likely for a big cloud provider to leak customers' chat logs than for a random node on the internet.

To make this a little more private, you could run as many layers as possible on the user's machine (or your centralized gateway) and offload the rest to a group of nodes, so that no single node runs the inference from start to finish.
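The layer-splitting idea can be illustrated with a toy model: the user's machine runs the first layers and ships only intermediate activations to a remote node, so the node never sees the raw prompt. This is a sketch of the suggestion, not anything OwlRun implements today:

```python
import math

def local_layers(tokens):
    # Runs on the user's machine: map raw token IDs to activations.
    # Only these activations, not the original tokens, leave the device.
    return [math.tanh(0.1 * t) for t in tokens]

def remote_layers(activations):
    # Runs on an untrusted node: it sees activations, never the raw prompt.
    return [round(a * 2.0, 4) for a in activations]

raw_prompt = [104, 105]  # toy token IDs, never sent over the network
hidden = local_layers(raw_prompt)
output = remote_layers(hidden)
```

Worth noting that intermediate activations can still leak information about the input (activation-inversion attacks exist), so splitting raises the bar rather than giving full privacy.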

I'm new on stacker news

Welcome! By the way, I noticed your reply only because you sent the zap.
Since your comment was a freebie, it was hidden automatically; you should connect a wallet or buy some credits on SN to make sure you reach the people you reply to.

reply

Here just curl this unknown command to earn money now! Uh... anyone want to be a guinea pig and tell us how it works?

reply

Not unknown: the node client is open source (MIT license), and the code is here:
https://github.com/fabgoodvibes/owlrun

Just point your favorite AI agent at the repo and ask it to do a security assessment.

Having said that, please consider that the software is still in beta, although it has been pretty stable over the last few weeks and has been tested by a dozen people on different platforms. Ideally, run it in a VM or on alternative hardware first, not on your primary machine.

20 sats \ 1 reply \ @satring 1 Apr

realistically, how much can a top-end gaming gpu earn per day?

reply

Check this https://docs.owlrun.me/#provider-payouts

These are indicative projections, though: they depend on supply/demand dynamics, and market rates are also subject to change.