
Thank you @bitcoinplebdev, I also love Cursor now!!
275 sats \ 5 replies \ @satcat 12 Dec
Cursor + Claude is literally all you need. I cancelled all my other subscriptions except Cursor (since it includes all the top LLMs), and even the unlimited slow requests usually take only a couple of seconds longer. Use the Chat tab for just asking questions or short coding prompts, and use the Composer tab for complex prompts when you want the agent to automatically edit files, making sure to @-tag relevant files, or @codebase for the entire repo.
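For example (the file names here are made up), a tagged Composer prompt might look like:

```
@src/wallet.ts @src/api/invoices.ts
Add retry logic to the invoice polling in invoices.ts, reusing the backoff helper from wallet.ts.
```

Or just @codebase when the change touches more files than you'd want to tag by hand.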
reply
261 sats \ 4 replies \ @nichro 12 Dec
The way I've been using it:
Inline edit: short edits right where you are in the code, plus follow-up questions about them.
Chat: As you said, for wordier questions and explanations, some coding and follow-up, and features like @web to search the web (and other such tags to add docs into context, etc.), which Composer can't do yet.
Composer: As you said, the most complex (and the most capable/costly in terms of compute?), for bigger or more extensive prompts.
My question to you, since you seem experienced with it: ever since they released the Agent feature, I've been trying to figure out which option is best for what. Composer with vs. without Agent. Any tips?
reply
Tried it out over the last couple of days. Seems like it goes off on tangents, and it's a little too hard to control the code output at the moment. Might need access to a reasoning LLM like o1 to be very useful?
reply
222 sats \ 1 reply \ @nichro 15 Dec
I've been trying Normal and Agent back and forth, and while I find it hard to get a definitive read on it this fast, I tend to agree that Agent sometimes tries to think way too far ahead and gets a bit eager to dive in and mess shit up (even in a good way).
On a semi-related note on Cursor: I have a theory, or more of a hunch, that I've been meaning to test:
Using Chat to craft a prompt. Tell it the issues, scope, context, and documentation, and have it present you with a solution, but without necessarily coding it. Maybe pseudocode or steps. Ask follow-up questions, ask why it did X or Y that way, and tweak some stuff ("do it that way, not this way; you forgot to handle X and Y").
Iterate until it gives you a game plan and pseudocode that make sense.
Feed that pseudocode to Composer (Normal or Agent), like "hey, this is what we're trying to do and this is the game plan so far" (a sketch of such a hand-off below). Observe results.
My theory is that because that game plan was generated by AI, the wording and logic are already in "AI speak", with all its quirks and ways of writing, so it will understand what you want to do more accurately than if you typed it with all our human-ness.
Note: most of this is bro science coming out of my ass. Would be neat to see if results get better doing things that way.
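For what it's worth, the hand-off prompt itself could be as simple as this (wording and steps are placeholders, not a tested recipe):

```
This is what we're trying to do: <one-line goal>.
Game plan we worked out in Chat:
1. <step one from the pseudocode>
2. <step two>
Start with step 1, and only touch the files involved in it.
```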
reply
Yeah, good call. I'm probably not giving the Agent detailed enough prompts; with more detail it would likely do much better at keeping on track and not fucking up the code. Though that's fine sometimes, like you say, since it often leads to solutions I would never have thought of. Just have to remember to commit often!
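A minimal way to do that with plain git (nothing Cursor-specific):

```
# checkpoint before letting the agent loose on the working tree
git add -A && git commit -m "checkpoint before Composer Agent run"

# if the agent makes a mess, throw away its uncommitted edits
git reset --hard HEAD
```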
reply
Haven't got the Agent feature yet; looking forward to trying it.
reply