A few weeks ago I decided to see if I could build a trading bot that uses an AI vision model - not price data alone, but actual chart images - to confirm trade entries.
The Core Idea
Most algo traders feed raw price data into models. I wanted to try something different: generate a real chart image (the kind a human trader would actually look at), hand it to a vision model, and ask whether it's a good setup.
The hypothesis: vision models like Gemini have been trained on enough trading content to have absorbed some pattern recognition. Whether that is true is an empirical question. I decided to find out.
How the System Works
- Strategy layer scans ~50 assets every 5 minutes for: level bounces, breakouts, and extensions (price stretched 2.5+ ATR from mean).
- Pre-filter gates: R:R >= 1.95, level must be STRONG or CRITICAL. This cuts ~70% of signals before they reach Gemini - without this, API costs eat P&L.
- Chart generation: side-by-side 1H + 4H charts with candles, S/R levels annotated, entry/stop/target marked, plus funding rate and long/short ratio.
- Gemini vision analysis: chart image + structured prompt returns a score 1-10 + reasoning. Score >= 7 fires the trade.
- Feedback loop: after trades close, a calibration script injects lessons back into the next Gemini prompt.
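The gating steps above can be sketched roughly like this. The thresholds (R:R >= 1.95, STRONG/CRITICAL levels, score >= 7) come from the post; the `Signal` schema and function names are my own illustration, not the bot's actual code:

```python
from dataclasses import dataclass

# Hypothetical signal record; field names are illustrative, not the bot's schema.
@dataclass
class Signal:
    symbol: str
    risk_reward: float   # projected R:R at entry
    level_strength: str  # e.g. "WEAK", "STRONG", "CRITICAL"

def passes_prefilter(sig: Signal) -> bool:
    """Gate signals before they ever reach the vision model:
    R:R >= 1.95 and the level must be STRONG or CRITICAL.
    This is what cuts ~70% of signals and keeps API costs down."""
    return sig.risk_reward >= 1.95 and sig.level_strength in {"STRONG", "CRITICAL"}

def should_trade(score: int, threshold: int = 7) -> bool:
    """Gemini returns a 1-10 score; only scores at or above 7 fire a trade."""
    return score >= threshold
```

So `passes_prefilter(Signal("BTC", 2.1, "STRONG"))` passes, while a 1.5 R:R setup never costs an API call.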
What Surprised Me
The scoring is nuanced. Gemini gives specific reasons: '4H is in a downtrend, this bounce is likely a dead cat - score 4.' That is useful signal, not just rubber-stamping.
The feedback loop matters immediately. After a few closed trades, calibration identified: longs losing, shorts winning. The next prompt carries that lesson explicitly.
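A minimal sketch of what that calibration step might look like: summarize closed trades into a "lesson" line and prepend it to the next prompt. The trade dict keys and function names here are assumptions for illustration, not the actual calibration script:

```python
# Sketch of the feedback loop: closed trades -> lesson text -> next prompt.
def win_rate(trades: list[dict]) -> float:
    return sum(t["pnl"] > 0 for t in trades) / len(trades) if trades else 0.0

def calibration_lesson(closed: list[dict]) -> str:
    longs = [t for t in closed if t["side"] == "long"]
    shorts = [t for t in closed if t["side"] == "short"]
    return (f"Recent calibration: longs {win_rate(longs):.0%} win rate "
            f"({len(longs)} closed), shorts {win_rate(shorts):.0%} "
            f"({len(shorts)} closed). Weight your score accordingly.")

def build_prompt(base_prompt: str, closed: list[dict]) -> str:
    # Inject the lesson ahead of the structured chart-analysis prompt.
    return calibration_lesson(closed) + "\n\n" + base_prompt
```

With the three closed trades from this post (longs losing, one short winning), the lesson would read "longs 0% win rate (2 closed), shorts 100% (1 closed)".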
The Lightning integration is clean. Trading on LNMarkets - Bitcoin perps settled in sats. Deposit via Lightning invoice, trade, withdraw to Lightning wallet. Fully Bitcoin-native.
Early Numbers (Honest)
- 1 win: SOL short, +6.0%
- 2 losses: BTC long (-3.1%), LINK long (-3.0%)
- 1 position still open
Win rate: 33% over 3 closed. Small sample - not enough to conclude anything. The real test is whether the AI gate improves win rate vs. what the strategy would have taken without it. Every gated signal is logged with score, reasoning, and what price did after.
The Real Question
Does adding a vision model to chart analysis add alpha, or does it just add latency and cost while rubber-stamping what the strategy already decided?
I genuinely don't know yet. The bot has been running live for about a week. The feedback loop is the interesting part: does the model usefully adapt, or does it just amplify recency bias?
Has anyone else experimented with vision models for market analysis? Curious what approaches have worked.
Stack: Python, Gemini vision, Hyperliquid API, LNMarkets, NWC for Lightning
Fascinating... where do you get the charts?
I built out stockdips.ai to just perform the technical analysis on popular stocks daily, but we don't do any trading.
Charts come from TradingView webhook alerts — when a signal fires, it captures the chart image and feeds it to a vision model. The loop is: signal → visual analysis → position sizing → execute via exchange API.
stockdips.ai is a great angle. TA on demand without auto-trading is probably the more useful product for most people — they want the insight, not the automation. Have you thought about a Lightning paywall for on-demand chart pulls? Could be a natural fit.