
"AI solutions that are almost right, but not quite" lead to more debugging work.
AI tools are widely used by software developers, but those devs and their managers are still figuring out how best to put the tools to use, with growing pains emerging along the way.
That's the takeaway from the latest survey of 49,000 professional developers by community and information hub Stack Overflow, which itself has been heavily impacted by the addition of large language models (LLMs) to developer workflows.
The survey found that four in five developers use AI tools in their workflow in 2025—a portion that has been rapidly growing in recent years. That said, "trust in the accuracy of AI has fallen from 40 percent in previous years to just 29 percent this year."
The disparity between those two metrics illustrates the evolving and complex impact of AI tools like GitHub Copilot or Cursor on the profession. There's relatively little debate among developers that the tools are or ought to be useful, but people are still figuring out what the best applications (and limits) are.
59 sats \ 6 replies \ @optimism 15h
Does this show that most people assign trust before really knowing? Trust, don't verify?
reply
I think the initial hype around AI might be fading in this area. Maybe people thought AI was gonna be like some kind of god, but as they use it more and see the actual results, that trust seems to be dropping. Personally, I don’t use AI much for coding, but it has come in really handy in a few specific situations.
reply
172 sats \ 0 replies \ @optimism 12h
I'm personally only using it for experiments and I continuously get disappointed, but this is how we learn. I feel more optimistic about LLMs than I was during peak hype; there is definite progress being made, but I see several issues that still need to be addressed: quality (I can't help but feel there's still a massive garbage-in, garbage-out problem that people attempt to suppress with RL), cost (a Claude 4 Sonnet test cost me approximately 10k sats for a minor bug fix and 50k sats for a half-coded feature that still has bugs), and UX (chatbots are the worst interface).
reply
It's because they didn't know their subject matter before they used AI.
AI requires supervision by someone who can recognize the mistakes.
reply
10 sats \ 2 replies \ @optimism 12h
So basically you're saying that the shills were noobs? I sort of agree. The loudest voices were coming from people who have no proven track record, just lots of fanbois in their following. There are a few exceptions, but those people all have AI products, so they're riding the sales wave.
Most expert coders I know (I don't consider myself an expert coder despite over 30 years of experience on very large projects, both in FOSS and commercially, because I've pivoted to dev leadership) are about as skeptical as I am, or more so, while open to trying new things. I haven't heard much shilling coming from these people, but when they
reply
Yes when they use AI they won't be bamboozled. ;-)
reply
That meant to say "but when they recommend something I try it out"
reply