Pretty good observation about vibe coding breaking continuity:
The first thing that failed was not accuracy. It was continuity.
Early on, the systems felt almost magical. Ask a question, get a plausible answer. Push a little harder, get something surprisingly sophisticated. It’s easy, in that phase, to assume you’re dealing with something stable.
You’re not.
What breaks first is state. The model slowly loses track of what matters. Not dramatically. Not in a way that throws errors. It just drifts. A variable name that mattered stops mattering. A constraint that was explicit becomes optional. A file structure that was once sacred turns into a suggestion.
I wonder if there is a word for this. Quality definitely seems to degrade as LLMs try to keep more and more information in their context window.
I find that results are almost always better when you start fresh: make the LLM forget its context, have it re-read the relevant code, and begin from a clean prompt.
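A minimal sketch of that workflow, assuming the official `openai` Python SDK; the model name, file names, and prompts here are placeholders for illustration, not anything from the thread:

```python
# Sketch: every run is a brand-new, single-turn conversation; nothing carries over.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fresh_ask(task: str, files: list[str]) -> str:
    """Re-read the relevant code from disk and ask in a fresh context."""
    code = "\n\n".join(f"# {name}\n{Path(name).read_text()}" for name in files)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[  # no accumulated history: the whole context is rebuilt per call
            {"role": "system", "content": "You are a careful code assistant."},
            {"role": "user", "content": f"{task}\n\nRelevant code:\n{code}"},
        ],
    )
    return response.choices[0].message.content

# Each call starts from zero, so earlier drift cannot leak in.
print(fresh_ask("Refactor the parser without changing its API.", ["parser.py"]))
```

The point is just statelessness: because nothing persists between calls, the model cannot drift away from constraints it was never reminded of; you restate them every time.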
Yes. I do this all the time. Compaction is death, simply because the compaction mechanism... sucks.¹ It's probably a science all by itself to compact knowledge, and if there is ever a working `brainzip -9` that is readable and indexable, then I really want that in my neuralink, lol.
Footnotes
1. At least it is on Cursor / Cline / Roo / Claude Code. Haven't tested Codex or the Gemini CLI that deeply, but these clients are all open source, so we can bet on them all shining and sucking equally; no way this industry would let a competitor keep a moat in visible code. ↩
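To make the compaction complaint above concrete, here is a naive sketch of the kind of mechanism being criticized: when the history exceeds a token budget, older turns are replaced by a model-written summary. Everything here (the budget, the prompts, the model name, the "keep the last four turns" rule) is an assumption for illustration, not how any particular client actually does it:

```python
# Naive context compaction: summarize old turns once a token budget is exceeded.
from openai import OpenAI

client = OpenAI()
TOKEN_BUDGET = 8_000  # assumed budget

def rough_tokens(messages: list[dict]) -> int:
    # Crude estimate: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact(messages: list[dict]) -> list[dict]:
    """Replace everything but the most recent turns with a lossy summary."""
    if rough_tokens(messages) <= TOKEN_BUDGET:
        return messages
    old, recent = messages[:-4], messages[-4:]
    summary = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize this conversation briefly."},
            {"role": "user", "content": "\n".join(m["content"] for m in old)},
        ],
    ).choices[0].message.content
    # This is where detail dies: exact identifiers, constraints, and file
    # structure get paraphrased away, producing the drift described above.
    return [{"role": "system", "content": f"Summary so far: {summary}"}] + recent
```

The lossiness is structural, not a bug in any one client: a summary keeps what the summarizer judged salient at compaction time, not what the task will later turn out to need.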
Oh yes, I've seen this a lot working with Java and LLMs. You start off fine, with a good initial structure. But once you start working on tweaks and pushing for detail, it forgets the original structure and creates whole new bugs. At various times I've seen it get lost and forget the original program entirely. You fix that by starting a new discussion with the latest working code and building from there. Iteration is the band-aid for LLMs getting distracted.
I give up pretty early when something isn't working. I keep coming back to the thought that I could be doing something more productive with my time. I'll come back to vibe coding ideas when LLMs better understand what I want done.
Kind of makes sense: man hunts for food, finds what he likes, man loses food, tries to understand why he lost it, when he should just find another food source.
Vibecoding is what happens when you don’t actually know how to build something, but you know what the outcome should look like, and you refuse to stop asking questions until the shape emerges.
It is spot on. I mean, if you don't know the difference, how would you tell whether your program was supposed to use recursion for a faster result instead of a complicated nested loop?
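For what it's worth, recursion is not faster per se; the win comes from divide-and-conquer changing the asymptotics. A small illustration of the difference the commenter means (my example, not theirs): a recursive merge sort runs in O(n log n), while a nested-loop sort is O(n²), and you only notice which shape your program "should" have if you know to look:

```python
# Two ways to sort: recursive divide-and-conquer vs. nested loops.
# merge_sort is O(n log n); selection_sort is O(n^2).

def merge_sort(xs: list[int]) -> list[int]:
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

def selection_sort(xs: list[int]) -> list[int]:
    xs = xs[:]  # don't mutate the caller's list
    for i in range(len(xs)):             # outer loop: position to fill
        for j in range(i + 1, len(xs)):  # inner loop: pull smaller items forward
            if xs[j] < xs[i]:
                xs[i], xs[j] = xs[j], xs[i]
    return xs
```

If you can't articulate that distinction yourself, you also can't tell whether the LLM picked the right one.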
The AI wasn’t an author anymore. It was a very capable junior collaborator who needed constant context and firm boundaries.
Imagine now the horror that ends up on my desk when it is an actual junior collaborator (read: an undergrad, a PhD student, or a postdoc from a country where degrees are handed out to anyone paying the fee) who starts using LLMs to vibecode.