100 sats \ 1 reply \ @SimpleStacker 18h \ parent \ on: Adventures in Extreme Vibecoding AI
I wonder if there is a word for this. Quality definitely seems to degrade as LLMs try to keep more information in their contextual memory.
I find that results are almost always better when you start fresh. Make the LLM forget its context, have it re-read the relevant code, and start from a fresh prompt.
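The fresh-start workflow above can be sketched in a few lines. This is a hypothetical illustration, not any particular client's API: `build_fresh_context` and the message shape are assumptions, standing in for whatever chat-completion call your tool actually makes.

```python
# Hypothetical sketch of "start fresh": instead of letting one long
# conversation accumulate, rebuild the context from scratch each time,
# containing only the task and the code it actually touches.

def build_fresh_context(task, relevant_files):
    """Assemble a minimal context: system prompt, the relevant source, the task."""
    messages = [{"role": "system", "content": "You are a coding assistant."}]
    for path, source in relevant_files.items():
        # Re-read the relevant code on every run rather than trusting
        # whatever stale version lives in the old conversation.
        messages.append({"role": "user", "content": f"File {path}:\n{source}"})
    messages.append({"role": "user", "content": task})
    return messages
```

The point of the sketch is what's *absent*: no prior turns, no compacted summaries, just the code and the prompt.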
> I wonder if there is a word for this.
Dementia?
> Make the LLM forget its context.
Yes. I do this all the time. Compaction is death, simply because the compaction mechanism... sucks.[^1] It's probably a science all by itself to compact knowledge, and if there is ever a working `brainzip -9` that is readable and indexable, then I really want that in my neuralink, lol.

Footnotes
[^1]: At least it is on cursor / cline / roo / claude code. Haven't tested codex or gemini cli that deeply, but these clients are all open source, so we can bet on them all shining and sucking equally; no way this industry would let a competitor keep a moat in visible code.
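To see why compaction loses so much, here's a toy version of the mechanism, under the assumption (true of the clients above, roughly) that it keeps the most recent turns verbatim and squashes everything older into one summary message. Real clients ask the model itself to write the summary; this sketch just truncates, which makes the lossiness explicit rather than hidden.

```python
# Toy compaction pass: keep the last few turns, squash the rest into a
# single summary message. Everything not in `recent` survives only through
# the summary, so whatever the summarizer drops is gone for good.

def compact(messages, keep_last=4, summary_chars=200):
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    blob = " ".join(m["content"] for m in old)
    summary = {"role": "system",
               "content": "Summary of earlier turns: " + blob[:summary_chars]}
    return [summary] + recent
```

Ten turns in, the model is reasoning from four real messages and one lossy blurb, which is exactly when quality falls off a cliff.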