
Great piece about a non-coder (but tech-savvy) guy who decided to use LLMs to code something for a project.
138 sats \ 4 replies \ @optimism 18h
Pretty good observation about vibe breaking continuity:
The first thing that failed was not accuracy. It was continuity.
Early on, the systems felt almost magical. Ask a question, get a plausible answer. Push a little harder, get something surprisingly sophisticated. It’s easy, in that phase, to assume you’re dealing with something stable.
You’re not.
What breaks first is state. The model slowly loses track of what matters. Not dramatically. Not in a way that throws errors. It just drifts. A variable name that mattered stops mattering. A constraint that was explicit becomes optional. A file structure that was once sacred turns into a suggestion.
reply
I wonder if there is a word for this. Quality definitely seems to degrade as LLMs try to keep more information in their contextual memory.
I find that results are almost always better when you start fresh. Make the LLM forget its context. Have it re-read the relevant code and start from a fresh prompt, and you get better results.
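Concretely, the restart looks something like this (a rough sketch assuming an OpenAI-style chat message format; the file name and task below are made-up placeholders):

```python
from pathlib import Path

def fresh_conversation(task: str, files: list[str]) -> list[dict]:
    """Build a brand-new conversation instead of extending an old one.

    Re-reading the files from disk means the model sees the current
    ground truth, not its own drifting memory of it.
    """
    context = "\n\n".join(
        f"--- {name} ---\n{Path(name).read_text()}" for name in files
    )
    return [
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": f"{context}\n\nTask: {task}"},
    ]

# Hand this to your chat client as the ENTIRE history: no prior turns.
messages = fresh_conversation("Add input validation to parse_config().",
                              ["config.py"])  # hypothetical file/function
```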
reply
44 sats \ 0 replies \ @optimism 17h
I wonder if there is a word for this.
Dementia?
Make the LLM forget its context.
Yes. I do this all the time. Compaction is death, simply because the compaction mechanism... sucks [1]. It's probably a science all by itself to compact knowledge, and if there is ever a working brainzip -9 that is readable and indexable, then I really want that in my neuralink, lol.
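For a feel of why it's lossy, here's a deliberately naive sketch of the general compaction shape (not cursor's or claude code's actual mechanism, just the pattern):

```python
def compact(history: list[dict], keep_last: int = 6) -> list[dict]:
    """Keep the system prompt and the newest turns; squash the middle.

    This is roughly what coding agents do when the context window
    fills up, and it's exactly where explicit constraints and 'sacred'
    file structures quietly fall out of the conversation.
    """
    if len(history) <= keep_last + 1:
        return history
    system, middle = history[0], history[1:-keep_last]
    summary = {
        "role": "system",
        # A real agent asks a model to write this summary; whatever
        # the summary omits is gone for good.
        "content": f"Summary of {len(middle)} earlier messages: ...",
    }
    return [system, summary, *history[-keep_last:]]
```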

Footnotes

  1. At least it is on cursor / cline / roo / claude code. Haven't tested codex or gemini cli that deeply, but these clients are all open source so we can bet on them all shining and sucking equally - no way this industry would let a competitor keep a moat in visible code.
reply
100 sats \ 1 reply \ @winteryeti 17h
Oh yes, I've seen this a lot working with Java and LLMs. You start off fine, with a good beginning structure. However, once you start working on tweaks and pushing for detail, it forgets the original structure and creates whole new bugs. At various times I've seen it get lost and forget the original program entirely. You fix that by starting a new discussion with the latest working code and building from there. Iteration is the band-aid for LLM AIs getting distracted.
reply
18 sats \ 0 replies \ @optimism 17h
Happens in all languages. I've seen it happen in Python and JavaScript too, even though those are the supposed languages of choice for LLMs today.
But you made me realize something just now: I haven't once tested Java coding with an LLM! Haven't even thought of it. Hmmm.
reply
11 sats \ 1 reply \ @OT 17h
Great read. He was persistent, wasn't he?
I give up pretty early when something isn't working. I keep coming back to the thought that I could be doing something more productive with my time. I'll come back to vibe coding ideas when LLMs better understand what I want done.
reply
0 sats \ 0 replies \ @Car 14h
Sure was, all because of 🥡🥢
Kind of makes sense: man hunts for food, finds what he likes, man loses food, tries to understand why he lost it, when he should just find another food source.
reply
21 sats \ 1 reply \ @Car 15h
Pretty great explanation.
Vibecoding is what happens when you don’t actually know how to build something, but you know what the outcome should look like, and you refuse to stop asking questions until the shape emerges.
reply
It is spot on. I mean, if you don't know the difference, how would you tell whether your program was supposed to use recursion for a faster result instead of just a complicated loop inside a loop?
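A toy illustration of that difference (not from the article): the maximum-subarray problem solved with a nested loop versus recursive divide and conquer. Same answer, very different running time, and you only notice if you already know what to look for.

```python
def max_subarray_slow(xs: list[int]) -> int:
    """O(n^2): the 'complicated loop in a loop' version."""
    best = xs[0]
    for i in range(len(xs)):
        total = 0
        for j in range(i, len(xs)):
            total += xs[j]
            best = max(best, total)
    return best

def max_subarray_fast(xs: list[int], lo: int = 0, hi: int | None = None) -> int:
    """O(n log n): recursion splits the array in half, then also checks
    sums that cross the midpoint."""
    if hi is None:
        hi = len(xs)
    if hi - lo == 1:
        return xs[lo]
    mid = (lo + hi) // 2
    # Best sum crossing the midpoint: extend left from mid-1, right from mid.
    left, total = float("-inf"), 0
    for i in range(mid - 1, lo - 1, -1):
        total += xs[i]
        left = max(left, total)
    right, total = float("-inf"), 0
    for i in range(mid, hi):
        total += xs[i]
        right = max(right, total)
    return max(max_subarray_fast(xs, lo, mid),
               max_subarray_fast(xs, mid, hi),
               left + right)

data = [4, -1, 2, -7, 5, 3]
assert max_subarray_slow(data) == max_subarray_fast(data) == 8
```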
reply
The AI wasn’t an author anymore. It was a very capable junior collaborator who needed constant context and firm boundaries.
Imagine now the horror that ends up on my desk when it's an actual junior collaborator (read: an undergrad, a PhD student, or a postdoc from a country where degrees are handed out to anyone paying the fee) who starts using LLMs to vibecode.
reply
An LLM itself is an advanced coding concept.
reply