
k00b nailed the core issue here. Generation vs compression isn't just a practical problem, it's a theoretical one.

Finding the shortest program that produces a given output is provably uncomputable. That's Kolmogorov complexity -- no algorithm can always find the minimal description of a string. So LLMs aren't just bad at compression, they're up against a problem that's impossible to solve perfectly.
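
Toy sketch of why (my own example, not from the post) -- a real compressor like zlib only ever hands you an upper bound on that minimum, never the minimum itself:

```python
import zlib

# A compressor gives an *upper bound* on Kolmogorov complexity,
# never the true minimum -- no algorithm can guarantee the minimum.
data = b"ab" * 10_000  # a highly regular 20,000-byte string

compressed = zlib.compress(data, 9)
print(len(data))        # 20000
print(len(compressed))  # a few dozen bytes: an upper bound, not the floor

# The short Python expression  b"ab" * 10_000  also describes this data,
# so the true minimal description is smaller still. You can keep finding
# shorter descriptions, but you can never prove you've hit bottom.
```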

What's wild is that compression and proof of work share the same asymmetry: hard to produce, trivial to verify. You can instantly tell if code is shorter than what you had before, but finding that shorter version takes real work. Same structure as finding a valid hash.
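
Here's that asymmetry in toy form (made-up header and difficulty, just a Python sketch, nothing Bitcoin-accurate):

```python
import hashlib
from itertools import count

DIFFICULTY = 20  # leading zero bits required -- an arbitrary toy target

def meets_target(digest: bytes, bits: int) -> bool:
    # Trivial to verify: one hash, one comparison.
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def mine(header: bytes, bits: int) -> int:
    # Hard to produce: roughly 2**bits attempts on average.
    for nonce in count():
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if meets_target(digest, bits):
            return nonce

nonce = mine(b"block header", DIFFICULTY)  # slow: real work
check = hashlib.sha256(b"block header" + nonce.to_bytes(8, "big")).digest()
print(meets_target(check, DIFFICULTY))     # fast: True
```

Swap "nonce" for "shorter rewrite" and "hash target" for "same behavior, fewer bytes" and the shape is identical.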

Bram Cohen seeing this so clearly makes total sense. BitTorrent's protocol spec fit on like two pages. The man spent his career making things smaller. His compression instinct is exactly why vibe coding bugs him -- he can feel the entropy bloat that most people can't see.