Directions like this are interesting. I always wonder why the agent predicted that this would be the answer or info I want, or expect, to get. It has to come from the training phase, but that doesn't really answer the question of why.
The solution to problems like this is pretty simple: give the agent context. I've found Anthropic's approach with Claude Code much more effective for my workflows. I'm still learning how best to use the tools, but things like this are something you have to figure out. It seems to me the best path is to leave the agent as little uncertainty as possible about what you actually want. The less guessing, the better.
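As a concrete sketch of what "giving the agent context" can look like with Claude Code: a CLAUDE.md file in the project root gets pulled into the session, so you state your conventions once instead of re-explaining them in every prompt. The contents below are just one hypothetical project's notes, not a recommended template:

```markdown
# CLAUDE.md — project context (hypothetical example)

- Stack: TypeScript + Node 20; tests run with `npm test`
- Style: small pure functions, no new dependencies without asking
- Layout: API handlers in src/api/, shared types in src/types.ts
- When a task is ambiguous, ask before generating code
```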
A colleague of mine believes FP (functional programming) is a paradigm that will work better with coding assistants, since the flows leave less room for "creativity". And it can be easier to understand for a reader of the code.
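To illustrate the point, here's a minimal TypeScript sketch (names are made up) of the same computation in both styles. The functional version is arguably easier for both a reader and an agent to verify, because there's no hidden state to trace:

```typescript
type Order = { active: boolean; amount: number };

// Imperative version: state mutates across iterations, so a reader (or an
// agent) has to trace the loop to know what `total` ends up holding.
function sumActiveImperative(orders: Order[]): number {
  let total = 0;
  for (const order of orders) {
    if (order.active) {
      total += order.amount;
    }
  }
  return total;
}

// Functional version: each step is a pure transformation, so the data flow
// reads top to bottom with nothing left to guess about.
function sumActiveFunctional(orders: Order[]): number {
  return orders
    .filter((order) => order.active)
    .reduce((sum, order) => sum + order.amount, 0);
}

// Both return 5 for this input.
console.log(sumActiveFunctional([{ active: true, amount: 5 }, { active: false, amount: 3 }]));
```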
Exactly. And apart from poor results and wasted time, vague prompts or large blobs of info to process drain tokens -> funds, especially with thinking models designed to rid you of the need to think.
I shared my experience on this not so long ago here: #1075448