And we're back!
This week @Car and I launched the OpenAgents Propaganda Network (OAPN), filming in front of a live studio audience at PlebLab in Austin, Texas.
In our first episode this week we covered:
- Why OpenAI acquired TBPN - and why we need our own version
- OpenAI's commie "New Deal" regulatory capture effort
- Paying billion-dollar training runs to retail compute providers as bitcoin instead of further enriching NVIDIA and Big Cloud
- How OpenAgents will win
In our second episode yesterday we launched Pylon, our "compute miner" node software that enables anyone in the world to sell their spare compute for bitcoin in a permissionless open marketplace.
Our live studio audience at PlebLab successfully joined the Pylon network and earned bitcoin. We watched the numbers go up on openagents.com
Other topics covered:
- Meta "launching" a closed-source model no one can use ("it's SOTA, trust us bro")
- What we'll do with Blizzard IP if Microsoft ever sends us a cease & desist (hostile takeover followed by StarCraft 3 and World of StarCraft)
- OpenAgents comes to Substack with a @Car & OV post on monetary rails for agents
- Our predictions for Anthropic's vibe-coded IPO; ClosedAI regulatory capture efforts; floating datacenters in Atlantis; and the coming galactic empire
Can you help us bump up these numbers? Run a Pylon and earn bitcoin! Instructions at https://openagents.com
@BlokchainB might really enjoy the show
hmm 🤔
Will be dropping on Fountain soon boog!
Fascinating. How much does it cost as a customer? Also how does privacy here compare to the likes of OpenAi?
We are customer #1 --- we want the compute for a distributed model training run starting next week --- will eventually open this up for others to use the same network for training & inference but not sure yet what the pricing will look like
Conceivably way cheaper than other inference/training/finetuning platforms given we are bringing new compute online that has historically had no price attached - we'll see!
How do you verify inference without doing the inference yourself?
Current plan is to only accept inference done through our Pylon software (using our Psionic ML framework), so it's pretty easy to tell whether someone's compute goes through that process or not. We'll open it up to the broader network once we have better answers re verifiability over untrusted networks.
We're focusing less on inference to start; mainly warming up for a DiLoCo distributed training run next week, using code similar to what Prime Intellect and BitTensor/Templar used for their distributed training runs -- that's even easier to verify than inference: either someone's submitted work improves the run or it doesn't
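To make the "does submitted work improve the run?" check concrete, here's a hypothetical toy sketch of that acceptance rule, not the actual Psionic/DiLoCo code: a submitted update is applied to a copy of the model and merged only if validation loss goes down.

```rust
// Toy sketch of accept-if-it-improves-the-run verification.
// Model is a single weight w0 fit to y = w0 * x; loss is mean squared error.

fn val_loss(weights: &[f64], data: &[(f64, f64)]) -> f64 {
    data.iter()
        .map(|(x, y)| (weights[0] * x - y).powi(2))
        .sum::<f64>()
        / data.len() as f64
}

fn accept_contribution(weights: &mut Vec<f64>, delta: &[f64], data: &[(f64, f64)]) -> bool {
    let before = val_loss(weights, data);
    let candidate: Vec<f64> = weights.iter().zip(delta).map(|(w, d)| w + d).collect();
    if val_loss(&candidate, data) < before {
        *weights = candidate; // good work: merge it into the run
        true
    } else {
        false // reject (and optionally ding the node's reputation)
    }
}

fn main() {
    let data = [(1.0, 2.0), (2.0, 4.0)]; // true w0 = 2.0
    let mut weights = vec![0.5];
    // a helpful update moves w0 toward 2.0 and is accepted
    assert!(accept_contribution(&mut weights, &[1.0], &data));
    // a harmful update increases loss and is rejected
    assert!(!accept_contribution(&mut weights, &[-5.0], &data));
    println!("w0 after accepted update: {}", weights[0]);
}
```

The nice property, as noted above, is that the verifier never re-does the contributor's work; it only evaluates the result against held-out data.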
Will do a deep dive video on this Mondayish
I don't know what your Psionic ML framework is. Is the source code somewhere? Is there more going on than "if we presume they are running our unmodified client software, then we presume they are doing what we asked?"
I ask because I find that part most interesting. Every other project I've seen claiming decentralized inference tends to punt the problem when, imo, that's where all the value is. It's kind of like decentralized money without a blockchain and double spend prevention and all that.
Psionic is our Rust ML framework, source here: https://github.com/OpenAgentsInc/psionic
So far it's a glorified Rust port of relevant inference code from ollama/llama.cpp/MLX and training code from prime/bittensor etc
Pylon is our NIP-90 service provider that uses Psionic, all in a single Rust binary
Pylon gets assigned a job via our job dispatcher "Nexus" (a glorified Nostr relay / NIP-90 client, in our main monorepo OpenAgentsInc/openagents), processes the job through our own inference engine (not an HTTP call to a local Ollama like a lot of people do, which is easily spoofable), and will send it back over the network, probably with some verification salts/hashes showing it came from a real Psionic inference. This isn't fully built out yet; we'll focus more on it once we get the DiLoCo run going and have new models we want to run inference on
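The salt/hash idea above could look something like this. This is a hypothetical sketch of the shape of it, not the real Pylon wire format, and it uses the stdlib `DefaultHasher` as a stand-in for a real cryptographic hash like SHA-256: the dispatcher issues a per-job salt, the node hashes (salt, prompt, output) and returns the digest with the result, so a replayed or spoofed response won't match when the dispatcher recomputes.

```rust
// Hypothetical salted result tag -- stand-in for a real cryptographic hash.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn result_tag(salt: u64, prompt: &str, output: &str) -> u64 {
    let mut h = DefaultHasher::new();
    salt.hash(&mut h);   // per-job salt from the dispatcher
    prompt.hash(&mut h); // the job input
    output.hash(&mut h); // the claimed inference output
    h.finish()
}

fn main() {
    let salt = 0xC0FFEE;
    let tag = result_tag(salt, "2+2?", "4");
    // dispatcher recomputes with the same salt and compares
    assert_eq!(tag, result_tag(salt, "2+2?", "4"));
    // a different salt (or a tampered output) fails the check
    assert_ne!(tag, result_tag(salt + 1, "2+2?", "4"));
}
```

By itself this only proves the responder saw the salt, which is why it's described as best-effort and paired with reputation below.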
Separately from best-effort programmatic verification, our Nexus job dispatcher will factor in NIP-32 reputation events: untrusted nodes may get fewer jobs assigned until they build up reputation over time
Lots to solve here but wethinks we have the right primitives for it: helps to fully control every part of the inference/training pipeline so it's all in binaries we write -- can build verification into any part of it
Awesome, thanks!
fwiw The point I'm trying to get across, and I make this point to anyone in this problem space (I've seen two other bitcoiner projects in this domain in the last month): I think the primitive that matters is verification, because it's the one thing no one has solved afaik. That's not to trivialize everything else. The default position of we'll engage in the verification arms race eventually might still allow someone to build up a position in the market with everything else on point. It's just that imo the eat-the-market winner will have solved this problem (if it's solvable) and, also imo, projects that aren't terrified/hyper-focused on that, should be.
This is a thing with TeeML and NVIDIA's confidential GPU computing. I think the issue is those aren't in consumer hardware but rather enterprise-class stuff like H100s, and there's hefty overhead
Short of that might be able to do audit polling for reputation
My neckbeard demands bitcoin-like-scale for this, which does not happen with reputation, but that's probably retarded anyway.
For now our approach will be to let others solve the core technical issue and absorb that into our code when we need it
For example last week this CommitLLM project came out with a proposed solution some people seem excited about
We asked Codex to audit their approach, compare it to what we already have in Psionic, and propose an integration path.
That produced a decent analysis:
https://github.com/OpenAgentsInc/psionic/blob/main/docs/audits/2026-04-09-psionic-commitllm-adaptation-audit.md
May or may not proceed with that specific plan but will repeat the process whenever we need that level of verifiability, then port the code into Psionic and iterate as needed
Generally I don't expect verifiability to be a big enough selling point that people will prefer a different project over ours because they verify more than we do. None of the AI power users on my X feed care about Gensyn or any of the other projects prioritizing inference verifiability.
Not to trivialize its importance, just rather focus on network growth first & upgrade later. (Borrowing from Nostr's 'worse is better' playbook)
How many decentralized inference power users are there? What kinds of customers want such inference? I'm curious which use cases experience such an inference shortage that they'll pay for it even when unverifiable and unaccountable.
That makes sense. Overcooking is worse than undercooking. I'm probably out of the target demographic because you wouldn't be doing it this way if probably-inference didn't have value.
Sadly not enough to build a big business around!
https://twiiit.com/fede_intern/status/2039176127273414955
https://twiiit.com/OpenAgents/status/2037717730707542232
AMA btw
How is the job assigned ?
Is it like a random thing or based on token response speed?
Inference jobs will go to the most appropriate device (some combination of lowest time-to-first-token, highest tokens-per-second, best reputation, etc.), but we're focusing first on distributed training runs: we'll try giving pieces of training work to all devices that can run them, but we don't know for sure which devices will be able to realistically contribute good work to the training run until we start gathering live data from our Pylon network next week as we push our training code live
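A minimal sketch of that dispatch heuristic, assuming made-up weights and field names purely for illustration: lower time-to-first-token is better, higher throughput and reputation are better, and untrusted nodes get their score discounted.

```rust
// Hypothetical device-scoring heuristic for job assignment.
struct Device {
    ttft_ms: f64,        // time to first token, milliseconds
    tokens_per_sec: f64, // sustained throughput
    reputation: f64,     // 0.0 (untrusted) ..= 1.0 (proven)
}

fn score(d: &Device) -> f64 {
    // penalize slow first tokens, reward throughput
    let speed = d.tokens_per_sec / (1.0 + d.ttft_ms / 100.0);
    // untrusted nodes get at most half weight until they build reputation
    speed * (0.5 + 0.5 * d.reputation)
}

fn main() {
    let m2_air = Device { ttft_ms: 250.0, tokens_per_sec: 35.0, reputation: 0.9 };
    let rtx4090 = Device { ttft_ms: 80.0, tokens_per_sec: 120.0, reputation: 0.2 };
    let best = if score(&rtx4090) > score(&m2_air) { "rtx4090" } else { "m2_air" };
    println!("assigning job to: {best}");
}
```

The actual weighting will presumably come out of the live Pylon data mentioned above rather than constants like these.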
Will share more data in our territory as we get it
Can you give me a sense of what kind of compute I need to have in order to be able to do this? Can I run it on my old Pixel 4 mobile device? Or do I need to have a mac mini or a nuc?
For device support we've been focusing on 1) any Apple M chip (macOS) and 2) any decent NVIDIA GPU.
Our model inference via Psionic now works well on those devices (for Qwen 3.5 & Gemma 4) and we expect those same devices to also be able to contribute well to a DiLoCo run, but won't know for sure which devices will be able to realistically contribute to the training run until we start gathering live data from our Pylon network next week as we start pushing our training code live.
We'll cover what we learn each week in our next video episodes, aiming to release every M/W/F.
these stats are insane
What will the compute be used for?
Primarily a distributed training run; we're gearing up to train our own "Psion" models
A bit about that here - more details next week: https://x.com/OpenAgents/status/2036908227019809259
Welcome back!
OpenAgents.com, where AI agents learn to earn.
This article on the OpenAgents Substack goes into more detail on what, why, and for whom 'Team Open' is building: https://open.substack.com/pub/openagents/p/building-the-monetary-rail-for-the?utm_campaign=post-expanded-share&utm_medium=web
Subscribe there to follow the blog and podcast feeds.
To join the community discussions here's the Discord link: https://t.co/ZSKzFjCERm
woo doggie 🐶
I’m not convinced to trade the CPU cycles (and wear) on my Apple M chip for sats at an unknown compensation rate. Do you have a calculator for what I’d expect to earn?
We'll make a calculator eventually
At the moment we're just paying 2 sats every 20 seconds for basic heartbeats to identify an initial supply
Real training run starts Mondayish next week. We'll ramp up payments if needed to attract more compute faster
Price will fluctuate while we are the only buyer; the goal is to do very cool stuff (training runs / finetuning / agent-optimized inference) that attracts more buyers and we all bid for your compute
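Until a real calculator exists, the back-of-envelope math at the stated heartbeat rate (2 sats per 20 seconds) is straightforward; note the rate is temporary and may change once real training jobs start.

```rust
// Back-of-envelope earnings at the stated heartbeat rate (2 sats / 20 s).
fn sats_per_day(sats_per_heartbeat: u64, heartbeat_secs: u64) -> u64 {
    let heartbeats_per_day = 86_400 / heartbeat_secs; // seconds in a day / interval
    heartbeats_per_day * sats_per_heartbeat
}

fn main() {
    // 86400 / 20 = 4320 heartbeats/day, times 2 sats = 8640 sats/day
    assert_eq!(sats_per_day(2, 20), 8_640);
    println!("idle heartbeat income: {} sats/day", sats_per_day(2, 20));
}
```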
https://twiiit.com/OpenAgents/status/2041203318958027085
Launchpad that load
Episode Transcript: https://github.com/OpenAgentsInc/openagents/blob/main/docs/transcripts/221.md
Summary:
Pylon is presented as a lightweight compute miner that lets people sell spare computer power for Bitcoin. It runs as a node on a user’s machine, connects through Nostr as a NIP-90 service provider, and is meant to be easy to install through agent tools like Claude, Codex, or Cursor. The product naming is intentionally StarCraft-inspired: Probe is the coding agent, Pylon is the compute node, Nexus is the central relay layer, and Psionic is the Rust ML framework behind the broader system.
The immediate use case is simple: contribute unused hardware, handle AI workloads, and get paid in a built-in Bitcoin wallet. The current focus is lightweight inference, including Gemma models, while measuring what different devices can support reliably. The roadmap expands toward fine-tuning, embeddings, image generation, and especially decentralized training, with the larger aim of turning ordinary user hardware into a large open AI compute marketplace rather than relying only on major cloud providers and centralized labs.
The broader thesis is that AI will require an open global economic layer, and Bitcoin, Lightning, and Nostr are positioned as the stack for that future. The project argues that if open, Bitcoin-native infrastructure is not built now, closed companies and payment platforms will control the machine economy in the same way older financial systems controlled prior eras. From that perspective, Pylon is not just a utility for earning sats from spare compute; it is framed as the first practical step toward a decentralized AI ecosystem where open agents, open markets, and user-owned hardware compete directly with closed incumbents.
Pylon is a good album
Give to me free
this is the bit that matters to me, not the ai hype. a compute market only gets real once the payout model is legible and the work is measurable without hand-waving. if i can point spare hardware at it, see what task it picked up, and settle over lightning, that’s the kind of loop that feels native to bitcoin.