The problem: We've had L402 podcast search primitives (semantic search, chapter discovery, transcription) live for months. People wanted the data but struggled to implement orchestration logic. Which primitives do I call? In what order? How do I synthesize results?
Agents need orchestration, not just primitives.
The solution: Natural language agent endpoint that handles the orchestration for you.
Send: "Find authoritative sources on BOLT12 and blinded paths."
Get back: Structured JSON with exact clips, timestamps, transcripts from millions of podcast moments. ~140 sats per call.
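The request/retry cycle is standard L402: the first call returns HTTP 402 with a macaroon and a Lightning invoice; after paying, you retry with the invoice preimage. A minimal sketch of the header handling, assuming the common L402 challenge format (the sample macaroon, invoice, and preimage values below are placeholders, not real credentials):

```python
import re

def parse_l402_challenge(www_authenticate: str) -> dict:
    """Pull the macaroon and invoice out of an L402 challenge header.

    Assumes the usual convention:
    WWW-Authenticate: L402 macaroon="<base64>", invoice="<bolt11>"
    """
    fields = dict(re.findall(r'(\w+)="([^"]*)"', www_authenticate))
    return {"macaroon": fields["macaroon"], "invoice": fields["invoice"]}

def l402_auth_header(macaroon: str, preimage: str) -> str:
    """Build the Authorization header for the retried request,
    proving payment with the invoice preimage."""
    return f"L402 {macaroon}:{preimage}"

# Placeholder challenge values for illustration only.
challenge = 'L402 macaroon="AgEEbHNhdA==", invoice="lnbc1400n1..."'
parsed = parse_l402_challenge(challenge)
print(l402_auth_header(parsed["macaroon"], "deadbeef"))
# -> L402 AgEEbHNhdA==:deadbeef
```

Any HTTP client that can intercept a 402 response and pay the invoice can plug this in.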
Interactive Playlist link: https://www.pullthatupjamie.ai/app?researchSessionId=69e92775ccfd4bbf5e25e948
Under the hood:
- Agent parses natural language → plans which primitives to call (semantic search, chapter search, etc.)
- Executes primitives in parallel where possible
- Synthesizes results into structured output
- Returns agent-parseable JSON (episode GUIDs, timestamps, audio URLs) + human-readable text and hints on appropriate next steps
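As a rough illustration of consuming the structured output, a client might do something like this. The field names (`clips`, `episodeGuid`, `startTime`, `audioUrl`, `nextSteps`) are hypothetical placeholders, not the actual schema; check the OpenAPI spec for the real shape:

```python
import json

# Hypothetical response body; the real schema may differ.
raw = json.dumps({
    "summary": "Two episodes cover BOLT12 in depth.",
    "clips": [
        {"episodeGuid": "abc-123", "startTime": 754,
         "transcript": "BOLT12 offers are reusable...",
         "audioUrl": "https://example.com/ep.mp3"},
    ],
    "nextSteps": ["Narrow to blinded-path discussion"],
})

result = json.loads(raw)
for clip in result["clips"]:
    # Assuming timestamps arrive in seconds; format mm:ss for display.
    m, s = divmod(clip["startTime"], 60)
    print(f'{clip["episodeGuid"]} @ {m}:{s:02d} -> {clip["audioUrl"]}')
# -> abc-123 @ 12:34 -> https://example.com/ep.mp3
```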
All primitives are still L402 endpoints you can call directly. But most users want the orchestrated flow, not primitive composition.
Why this matters:
• L402 payment happens once, orchestration layer uses your balance for multiple primitive calls
• Macaroon caveats work across all primitives (spending limits, time bounds, scope restrictions)
• SSE streaming shows which primitives are being called in real time
• Standard HTTP - works with any agent framework that can handle 402 responses
Try it in our reference client: https://www.pullthatupjamie.ai/app?view=agent
For builders: OpenAPI spec + integration guide at https://pullthatupjamie.ai/llms.txt
More client implementations coming soon. Interested in feedback from anyone building L402 agent infrastructure.
Full writeup: https://www.pullthatupjamie.ai/blog/openclaw-is-great-hosting-paying-the-bill-arent-so-i-built-jamie-pull-20260421
Future Dev:
• Create: Video/audio clip primitives with burned-in captions. Point at timestamp, get shareable media in seconds.
• Publish: One-call cross-posting to Nostr + Twitter with timestamp links. Research → clip → scheduled publish, all L402.
• Worker: Async job primitives for long-running tasks. Standing research briefs, recurring digests, scheduled clip drops. Nothing blocks your agent, results push when complete.