
Tried it out over the last couple of days. It seems to go off on tangents, and the code output is a little too hard to control at the moment. Might need access to a reasoning LLM like o1 to be very useful?
222 sats \ 1 reply \ @nichro 15 Dec
I've been trying Normal and Agent back and forth, and while I find it hard to get a definitive read on it this fast, I tend to agree that Agent sometimes tries to think way too far ahead and gets a bit too eager to dive in and mess shit up (even in a good way).
On a semi-related note on Cursor: I have a theory, or more of a hunch, that I've been meaning to test:
Use Chat to craft a prompt. Tell it the issues, scope, context, and documentation, and have it present you with a solution, but without necessarily coding it yet; maybe pseudocode or steps. Ask follow-up questions, ask why it did X or Y that way, and tweak some stuff ("do it that way, not this way; you forgot to handle X and Y").
Iterate until it gives you a game plan and pseudocode that make sense.
Feed that pseudocode to Composer (Normal or Agent), like "hey, this is what we're trying to do and this is the game plan so far" (something like the sketch at the end of this comment), and observe the results.
My theory is that because that game plan was generated by AI, the wording and logic are already in "AI speak", with all its quirks and ways of writing, so it will understand what you want to do with more accuracy than it would from something typed with all our human-ness.
Note: most of this is bro science coming out of my ass. Would be neat to see if results get better doing things that way.
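To make it concrete, here's a rough sketch of what that pasted game plan might look like: the commented steps at the top are the kind of thing Chat would hand you, and the function below them is roughly what you'd hope Composer produces from it. All the names here (fetch_data, client.py, the retry behaviour) are made up purely for illustration, not from anything I've actually run.

```python
# Hypothetical game plan (iterated on in Chat, then pasted into Composer):
#   Context: fetch_data() in client.py dies on flaky network calls.
#   1. Wrap the GET in a loop, max 3 attempts.
#   2. Back off 1s / 2s / 4s between attempts.
#   3. Retry only timeouts and 5xx responses; let 4xx raise immediately.
#   4. Log every retry so failures stay visible.

import logging
import time

import requests

log = logging.getLogger(__name__)


def fetch_data(url: str, max_attempts: int = 3) -> dict:
    """Fetch JSON from url, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.get(url, timeout=10)
        except requests.Timeout as err:
            last_err = err  # transient failure: eligible for retry
        else:
            if resp.status_code < 500:
                resp.raise_for_status()  # 4xx surfaces immediately, no retry
                return resp.json()
            last_err = requests.HTTPError(f"server error {resp.status_code}")
        if attempt == max_attempts:
            raise last_err
        delay = 2 ** (attempt - 1)  # 1s, 2s, 4s
        log.warning("attempt %d failed (%s); retrying in %ss", attempt, last_err, delay)
        time.sleep(delay)
```

Even if the code itself isn't what you'd ship, the point is that the plan is already phrased the way the model phrases things, so there's less room for it to wander off and improvise.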
Yeah, good call. I'm probably not giving the Agent detailed enough prompts; with more detail it would likely do a much better job of keeping on track and not fucking up the code. Which is fine sometimes, like you say, since it often leads to solutions I would never have thought of. Just have to remember to commit often!