222 sats \ 1 reply \ @SimpleStacker 3h \ on: Where's the Shovelware? Why AI Coding Claims Don't Add Up - Mike Judge AI
I wonder what @k00b thinks
I don't think I agree with this approach. You can't measure the impact of AI based on macro trends like this; too many other factors could be at play. There's also a lag between the introduction of a technology and its adoption, and then another lag between adoption and its effects showing up in products.
Lastly, I'm very wary of telling people that their perceptions are wrong. If someone who's normally lucid perceives something to be true, yet the data says otherwise, I'm more inclined to re-interpret my data than to conclude the person is wrong. (This only applies to perceptions about their own experiences, like how fast I code, not perceptions about the outside world, like the effects of capitalism.) So, with that being said, if developers perceive themselves to be more productive with AI, but the data says otherwise, my first instinct is to question how the data was collected, or how we should actually interpret it.
From my personal experience, AI doesn't accelerate me much on tasks I'm already very competent in. Maybe a 10% boost from autocomplete, minus 5% for the times it's wrong. But AI accelerates me immensely when I'm learning a new technology from scratch, or getting to that first prototype of what I'm trying to build.
My intuition mirrors yours. With technology things I trust people's sense of them unless I can point to things distorting their senses.
I'm seeing people program toy applications without understanding what they're doing. And I'm hearing from experienced, junior-to-mid-ish programmers that they are full-time vibing; though, until I see the output, it might be wishful thinking looking to quench imposter syndrome (I don't need to be the master programmer I pretend to be, I can just be an LLM wizard and arrive at the king's table as I am destined).
I do think there are things distorting senses: VC money spent trying to justify itself, and non-programmers (and weak/lazy/ill-suited programmers) relieved they can achieve results without the aptitude/dopamine/fixation/skills. I also think mid and lower programmers probably struggle to review LLM output well and overestimate LLMs' abilities.
Regardless of all that, the trend is strong afaict: LLMs are getting better at programming pretty fast.
For me personally, I haven't experimented much due to the context problem. Most of my programming tasks lately have been ocean-sized problems rather than pond or even lake-sized. But when I have a pond-sized problem, I use LLMs. For lake-sized ones, I might use LLMs for pre-code ideation.
My hope is to spend a month full-time vibing before the year is over and see how my opinion changes.