
Does this show that most people assign trust before really knowing? Trust, don't verify?
I think the initial hype around AI might be fading in this area. Maybe people thought AI was gonna be like some kind of god, but as they use it more and see the actual results, that trust seems to be dropping. Personally, I don’t use AI much for coding, but it has come in really handy in a few specific situations.
172 sats \ 0 replies \ @optimism 20h
I'm personally only using it for experiments and I continuously get disappointed, but this is how we learn. I feel more optimistic about LLMs than I did during peak hype; there is definite progress being made. Still, I see several issues that need to be addressed: quality (I can't help but feel there's still a massive garbage-in, garbage-out problem that people attempt to suppress with RL), cost (a Claude 4 Sonnet test cost me approximately 10k sats for a minor bug fix, and 50k sats for a half-coded feature that still has bugs), and UX (chatbots are the worst interface).
It's because they didn't know their subject matter before they used AI.
AI requires supervision by someone who can recognize the mistakes.
10 sats \ 2 replies \ @optimism 21h
So basically you're saying that the shills were noobs? I sort of agree. The loudest voices came from people who have no proven track record, just lots of fanbois in their following. There are a few exceptions, but those people all have AI products, so they're riding the sales wave.
Most expert coders I know (I don't consider myself an expert coder despite over 30 years of experience on very large projects, both FOSS and commercial, because I've pivoted to dev leadership) are about as skeptical as I am, or more so, while still open to trying new things. I haven't heard much shilling from these people, but when they
Yes, when they use AI they won't be bamboozled. ;-)
That was meant to say "but when they recommend something, I try it out."