Two weeks ago I set up OpenClaw on a Mac Mini in my apartment. Started with one agent that checked my email. Now I have 25 running around the clock. Bitcoin morning reports, mining dashboards, concert scanners, playlist curators, PR reviewers, neighborhood data trackers, the whole thing.
Most of what people share online about AI agents is garbage. Screenshots of one cool prompt, YouTube videos with shocked faces, zero practical advice. So here's what I actually learned running this stuff day to day.
## Use cheap models for almost everything
This was the biggest shift. My first week I had Claude Opus running every background task. Burned through my API quota in two days. Now my stack looks like this:
- Haiku for anything repetitive: email triage, posting scheduled content, checking costs, monitoring dashboards
- Sonnet for anything that needs real judgment: writing replies, analyzing data, making decisions
- Opus only when I specifically ask for it: creative writing, complex problem solving, system design
My background agents run on Haiku. They check email every 2 minutes, scan for new bookmarks, review open PRs, monitor my Bitcoin mining rigs. Cost is basically nothing. When one of them finds something that needs brainpower, it spawns a Sonnet sub-agent to handle it. The cheap model triages. The expensive model works.
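To make that concrete, here's a minimal sketch of the triage-then-escalate loop as a bash script hitting the Anthropic Messages API directly. The model aliases, the `inbox.txt` file, and the prompts are illustrative, not my exact setup:

```bash
#!/usr/bin/env bash
# Triage-then-escalate sketch. Calls the Anthropic Messages API directly;
# model aliases, inbox.txt, and prompts are illustrative.
set -euo pipefail

ask() {
  local model="$1" prompt="$2"
  curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d "$(jq -n --arg m "$model" --arg p "$prompt" \
        '{model: $m, max_tokens: 1024, messages: [{role: "user", content: $p}]}')" \
    | jq -r '.content[0].text'
}

# Cheap model decides whether anything needs real attention.
verdict=$(ask "claude-3-5-haiku-latest" \
  "Inbox summary: $(cat inbox.txt). Reply ESCALATE if anything needs a real reply, else NO_REPLY.")

# Only spend expensive tokens when triage says so.
if [[ "$verdict" == *ESCALATE* ]]; then
  ask "claude-3-5-sonnet-latest" "Draft replies for what needs attention: $(cat inbox.txt)"
fi
```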
## Cron jobs beat always-on agents
Always-on agents sound cool until you see the bill. They burn tokens just sitting there waiting for something to happen.
Cron jobs fire at specific times, do their thing, and shut down. My morning routine fires six agents between 5:30 and 7 AM: Bitcoin market report, Knicks recap, mining profitability check, weather briefing, calendar summary, email digest. Each one runs independently, posts to its own Discord channel, and costs pennies.
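The schedule itself is just a crontab. Times and script names here are illustrative; each script is one agent that runs, posts to Discord, and exits:

```bash
# crontab -e on the Mac Mini
30 5 * * * $HOME/agents/bitcoin-report.sh
45 5 * * * $HOME/agents/knicks-recap.sh
0  6 * * * $HOME/agents/mining-profitability.sh
15 6 * * * $HOME/agents/weather-briefing.sh
30 6 * * * $HOME/agents/calendar-summary.sh
45 6 * * * $HOME/agents/email-digest.sh
```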
The key insight from the community: "Governed agents compound because they create assets you can trust. Always-on agents just burn tokens and hallucinate productivity."
## The coordinator pattern saves money
Don't make one agent do everything. Have a cheap coordinator that checks what needs attention, then spawns specialized workers.
I run a "Lookout" agent on Gemini Flash every 2 minutes. It runs a bash script that checks: any new emails? New bookmarks? New iMessages? If nothing's new, it replies NO_REPLY and costs a fraction of a cent. If something IS new, it spawns a Sonnet agent with the specific data and instructions.
Result: 99% of checks cost almost nothing. The 1% that matter get full attention.
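A sketch of what a Lookout-style check could look like. The queue files and the `spawn-agent` command are hypothetical stand-ins for whatever your framework actually exposes:

```bash
#!/usr/bin/env bash
# Lookout sketch: dirt-cheap existence checks, escalate only on new work.
set -euo pipefail

new_items=""
[[ -s "$HOME/agents/queue/new_emails.txt"    ]] && new_items+="emails "
[[ -s "$HOME/agents/queue/new_bookmarks.txt" ]] && new_items+="bookmarks "
[[ -s "$HOME/agents/queue/new_imessages.txt" ]] && new_items+="imessages "

if [[ -z "$new_items" ]]; then
  echo "NO_REPLY"   # nothing new; this run cost a fraction of a cent
  exit 0
fi

# Something is new: hand the specifics to a smarter model.
spawn-agent --model sonnet \
  --task "Handle new items: $new_items" \
  --context "$HOME/agents/queue/"   # hypothetical spawn command
```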
## Make everything post somewhere visible
The biggest problem with agents is you can't tell what they're doing. My fix: every agent posts its output to a specific Discord channel. I have channels for Bitcoin intel, mining reports, music picks, system health, neighborhood data, and general alerts.
I can glance at Discord and know exactly what happened while I slept. No digging through logs. No wondering if something ran.
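Discord webhooks make this nearly free to set up. Roughly: one webhook per channel, one env var per webhook, one helper function shared by every script:

```bash
# Post a message to a channel-specific Discord webhook.
post_to_discord() {
  curl -s -X POST "$DISCORD_WEBHOOK_URL" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg c "$1" '{content: $c}')" > /dev/null
}

post_to_discord "✅ mining check: all rigs online"
```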
## Batch your periodic checks
Instead of 10 separate cron jobs for "check email," "check calendar," "check weather" — use one heartbeat that rotates through checks based on what's most overdue. Keeps costs flat and avoids everything firing at once.
My heartbeat file is tiny. It just lists what to check and how often. The agent reads it, picks the most overdue check, runs it, moves on.
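I won't claim this is my exact file, but the idea fits in a few lines of bash. Assume a `checks.txt` holding pairs of check name and interval in minutes:

```bash
#!/usr/bin/env bash
# Heartbeat sketch. checks.txt lists "name interval_minutes", e.g.:
#   email 10
#   calendar 60
#   weather 360
# Pick the single most overdue check, run it, record when it last ran.
set -euo pipefail
cd "$HOME/agents/heartbeat"

now=$(date +%s)
best="" ; best_overdue=-1
while read -r name interval; do
  last=$(cat "last_$name" 2>/dev/null || echo 0)
  overdue=$(( now - last - interval * 60 ))
  if (( overdue > best_overdue )); then
    best="$name"; best_overdue=$overdue
  fi
done < checks.txt

# Only run if something is actually due.
if (( best_overdue >= 0 )); then
  "./check_$best.sh"
  date +%s > "last_$best"
fi
```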
## Build your own tools before installing someone else's
Community skills and third-party plugins sound great until they break at 2 AM and you're debugging someone else's code.
I built simple bash scripts for everything: posting to Stacker News, checking Apple Music catalog, scraping my mining pool stats, pulling calendar data. They're 20-50 lines each. I understand every line. When something breaks, I fix it in minutes.
The agent doesn't need fancy integrations. It needs a bash script that does one thing reliably.
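For example, a pool-stats check is about this size. The endpoint, JSON fields, and rig count are placeholders for whatever your pool actually returns (`post_to_discord` is the webhook helper from above):

```bash
#!/usr/bin/env bash
# One tool, one job: read pool stats, alert if a rig drops.
set -euo pipefail

stats=$(curl -s "https://pool.example.com/api/account/$POOL_USER")
hashrate=$(echo "$stats" | jq -r '.hashrate')
workers=$(echo "$stats" | jq -r '.workers_online')

echo "hashrate: $hashrate | workers online: $workers"
if (( workers < 3 )); then   # 3 = expected rig count (placeholder)
  post_to_discord "⚠️ mining alert: only $workers workers online"
fi
```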
## Territory costs and posting economics
This one's specific to Stacker News. Every territory has a base posting cost. If you flood a territory with posts, the cost spikes. I learned this the hard way — posted six times in one day and watched the cost go from 21 sats to 200+ sats per post.
Now every scheduled post checks the cost first. If it's over 50 sats, it doesn't post. Simple bash check, saves real money. I believe the surge only applies when posts land within 10 minutes of each other, though; I'm not sure whether that cooldown period is configurable per territory.
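The guard is about this simple. `fetch_post_cost` and `post_to_stacker_news` are hypothetical helpers; wire them to however you actually read and write to the site:

```bash
#!/usr/bin/env bash
# Guard sketch: skip the scheduled post when the territory cost has spiked.
set -euo pipefail

MAX_SATS=50
cost=$(fetch_post_cost "bitcoin")   # current posting cost in sats

if (( cost > MAX_SATS )); then
  echo "cost is ${cost} sats (> ${MAX_SATS}), skipping this run"
  exit 0
fi

post_to_stacker_news "$1"   # $1 = the post body
```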
## Memory is not automatic
The agent wakes up fresh every session. If you don't write things down, they're gone. I have three memory layers:
- Daily markdown files (raw logs of what happened)
- A curated MEMORY.md (distilled lessons, important context)
- A LanceDB knowledge base (searchable archive of bookmarks and articles)
The agent reads today's and yesterday's daily files at startup. For anything older, it searches. This solved 90% of the "why did it forget that" problem.
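Loading that startup context is one `cat` away. A sketch, assuming one markdown file per day under a `daily/` folder:

```bash
#!/usr/bin/env bash
# Startup context sketch: curated memory plus the last two daily logs.
set -euo pipefail
cd "$HOME/agents/memory"

today=$(date +%F)            # e.g. 2026-02-03
yesterday=$(date -v-1d +%F)  # macOS; on Linux use: date -d yesterday +%F

cat MEMORY.md "daily/$today.md" "daily/$yesterday.md" 2>/dev/null || true
# Anything older is searched on demand via the LanceDB layer, not preloaded.
```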
## What this actually costs
My monthly spend: about $40-50 total. Two coding subscriptions at $20 each, plus $5-10 in API calls. Most of the API cost is the creative work — writing, analysis, complex reasoning. The background monitoring costs almost nothing because it all runs on cheap models.
## The real lesson
AI agents aren't magic. They're cron jobs with language skills. The people getting real value aren't the ones with the fanciest prompts. They're the ones who treat it like infrastructure: reliable, observable, and cheap to run.
Start with one agent that does one useful thing. Get it stable. Add another. Don't try to build a 25-agent system on day one. I built mine over two weeks, one boring automation at a time.
The boring stuff compounds. That's the whole trick.