
Curious about one thing in the dev community: if most current apps are made with all these AI prompts, why do we still call the guys managing the AI prompts "app devs"? Why not call them prompters or PMs (project managers)?

Also, sometimes I think that all the grants to these projects are literally money thrown into AI, and some guy just takes advantage while doing almost nothing. I do not see any creativity or PoW in using AI prompts to code apps and then taking pride in doing it.

These prompters are no more than bus drivers...

Why did people stop doing / creating things with their brains?

150 sats \ 9 replies \ @kepford 7 Jan

This is a good exercise in thinking about this subject.

It's complicated

This is a simplification, but I think it's not overly broad.

Those that use AI coding tools are all over the place. There are the vibe coder maxis. They seem to be true believers in the AI and say to just trust it. They seem naive to me. Many also seem to have VERY limited knowledge of programming in general.

Then on the other extreme you have the anti-AI crowd. They really seem to want the tools to NOT work. They may use them, but they don't really come to the tool with the mindset of "how can I use this?" or "is this useful?"

The last group I would call the practical users. Those that want to get more work done. I'm in this group. Some, like me, are skeptical of the tools but open-minded about them. Others are really bought into AI taking over everything but have actually found the edges of what it can't do and what it doesn't do well. They are practical. They figure out workarounds. They don't care as much about the how but more about the what.

What do you call them?

I don't really care what you call app devs honestly. I would not hire an AI prompter that doesn't know how to build something that works. I don't care if they write code from memory or if they use AI all day. Their code needs to work. It needs to be maintainable and understandable. No slop allowed.

There are a lot of lazy people

This was the case before AI coding tools and these tools just make it easier to fake it. But the bill always comes due. Just look at all the security holes in so much software created by large companies and even open source projects. AI is like pouring gas on the fire. Experienced devs that actually care are more valuable than ever, especially when an AI can write code much faster than a human. These tools don't know anything. They are always guessing. So you always need a human that knows what they want.

The Scammers like Scam Altman Have Poisoned the Thinking on AI

AI tools are just tools. Wild complex tools but tools. They have no desires. They can't think. They make guesses based on prompts. They do not work without prompts. They aren't employees but it can be helpful to use them in that way.

Only a fool would hire a junior dev (that's how I think of AIs) and never review their code. You can do it, but you're gonna pay for it at some point.

The way I think about it

AI coding tools are just tools. Almost no one reads the code of a project. I mean normal people. What matters is does it work. Crap projects have always existed. Crap devs have always existed. AI tools just make it easier and faster to write code. Good or bad.

Think about it this way. You could build a house using no power tools. No modern tools at all. You could do that. But would that mean it is better? Maybe. But the cost would be higher. Can you build a bad home with no power tools? Absolutely. Can you build it better with power tools? Nah, but faster yes. And good enough.

The bottom line is the output. It is possible to use AI prompts to write quality code. It takes planning and forethought. You need the skill of software engineering, and you need to understand how AIs (the tools) work. If you do, you will be far more productive with the tools than without them.

Why avoid AI prompting?

If one is learning to code, I think prompting is a mistake. Unlike my power tools analogy, there actually is massive value in learning what readable, good code looks like. Learning different paradigms like OOP or Functional has massive value. It did before this era of AI and it does now.

Why you shouldn't avoid AI coding tools

If you are a senior dev, I think avoiding AI tools is a mistake. It is just making you less valuable and productive. But the mistake I see most devs make is that they think it's magic. They bought the lie from the pitchmen that it's magic. It's not. The tools are still very poor really. They aren't smart. You have to really hold their "hands". They make stuff up all the time. They get things wrong. But if you learn to use them, they can make you much more effective and productive.

I've been using them daily for over a year now and I've been able to figure out new code bases much faster than I would have without the tools. I've been able to fix bugs and make updates to messy code bases and get stuff done. I'm not working on artisan open source projects. I'm a hired gun at a large corp doing grunt work. These code bases are not amazing. They suck in many ways already. My use of these tools has not made the code bases worse, it's made them better.

That's my perspective. Maybe there is a market / desire to only use artisanal apps, but I don't buy it. Even in the bitcoin world people care about one thing. Does it work. Does it get the job done. Don't buy the nonsense that there are only two options: use AI and create slop garbage, or write good clean code. That's nonsense. The truth is that if a dev is using AI effectively you may never know that they used it. What we see is a bunch of hacks using these tools like idiots. That's easy to do, so we see a lot of it. If someone says you don't need to learn to code, that you can just use the AI, they are ignorant.

It's a tool. It's not magic. It's not an intelligence.

reply
The truth is that if a dev is using AI effectively you may never know that they used it.

This is true, but then there is no objection.

reply
100 sats \ 2 replies \ @kepford 7 Jan

Exactly.

This is why I even posted on this thread. There are far too many people that are thinking in booleans about AI. Good or bad. It's both. Humans are always the driver of both. Like most tools, it just makes one more effective at creating trash or gold.

reply
141 sats \ 1 reply \ @optimism 7 Jan

I think that the main problem is the hype, spread by "gurus" that popped up overnight. People that learned the vibe coding hack, but don't have a frame of reference towards what code should look like, and how you maintain great code over decades.

For tooling, I don't care. For commercial software or SaaS, I don't care either as these are throwaway (especially now that we can just replace most of these with some GPU time.) But in a secure environment, you can't have slop; I need my secure products to be done well, and it's not just me. You don't want anyone that has any PII on you to secure that with sloppy code. There would be a lot of things that I personally cannot do if I can't have my audited, secure environment.

reply
100 sats \ 0 replies \ @kepford 7 Jan

Exactly

reply

If you are right... AI will never replace anyone...

Are you a believer now?

reply

I do not like to believe in anything.
The word 'believe' means supposition, not something based on facts.

reply

Play your word games. You know what I mean.

reply

No I don't. I am not a believer.

reply
I do not see any creativity or PoW in using AI prompts to code apps and then taking pride in doing it.

Agreed. It's like you coding something for your boss, then your boss puts his name on the commit and tells everyone "I made this", but what he should be saying is "I own the copyright on this". This is also why in patents, owners and inventors are separate roles. Same applies here:

LLM made it, you own it because you paid for the tokens or electricity, and set the target. "Vibe coding" != "coding".

Why did people stop doing / creating things with their brains?

Because they're lazy and don't care too much about quality work. Which fwiw is 90% of the people that send pull requests on my repositories, and I've pissed off a lot of people by holding them to actual standards on their code, before AI. Nowadays, I just prioritize pull requests by quality and the co-pilot ones are at the bottom.

reply

Oh, so it's worse than I thought: some AI bots are even making PRs to original code, and if the maintainer is not paying attention, it ends up as something worse. LOL

idk I have the feeling that we are living in a slop world...

reply
some AI bots are even making PRs to original code

Yeah. This happens all the time now. MSGH wanted this (the last GitHub CEO was literally appointed to do this "transition".) And FOSS maintainers are dealing with the problem of getting incredibly bad pull requests.

idk I have the feeling that we are living in a slop world...

I recently had a conversation with a friend that maintains a small open source C library for his job (and does not maintain code quality standards anywhere near what I'd personally be comfortable with), and he asked my opinion on a PR that he felt was off. I pointed him to a Claude Code pattern and he was amazed that he'd been bullshitted by the author for weeks. I also pointed out that the wording in some of the public conversation was highly suspect of being AI generated. His world... sucks right now. Especially because he and the author of the slop PR work for the same company, lol.

The current LLM architecture could be used when you're willing to read and understand every line and correct every error. I fully align with Linus on "yes it's interesting, no it's not for production".

reply

Sometimes I ask myself: should I use this app if I know that most of it was coded by an LLM?

reply

Not if it has critical functions.

I wouldn't use an LLM-coded OS/firmware, browser, PGP implementation, Bitcoin wallet, secure messenger, credential vault... basically anything that implements cryptography or performs generic functionality. This will get harder though: using Android and/or Chrome, we will probably be exposed to LLM-coded parts, as Google says 70% is now AI (which is extremely worrying depending on how they measure it, as review and testing take more time than coding.)

I put whitenoise on my insecure phone the other day for testing with ek, despite looking at the code and knowing that it is vibecoded. But there isn't anything on that phone that matters if it's leaked, or that cannot be wiped.

However, I desperately need help with reviewing code for my secure devices, so I am thinking about further automating code review on FOSS apps. Maybe I'd employ an LLM in that to flag up things on big diffs. I'm not super happy about this, but as code gets more voluminous, I can't keep up with all the release cycles.
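
To sketch what I mean (just an illustration, not a finished tool; it assumes the openai Python package pointed at an OpenAI-compatible endpoint and a placeholder model name), something like this could pull a tag-to-tag diff and ask the model to flag the hunks a human should look at first:

```python
# Rough sketch: have an LLM flag parts of a release-to-release diff for human review.
# Assumptions: `git` on PATH, the `openai` package against an OpenAI-compatible
# endpoint (API key / base URL taken from the environment), placeholder model name.
import subprocess
from openai import OpenAI

def release_diff(repo: str, old_tag: str, new_tag: str) -> str:
    """Full diff between two release tags of a local checkout."""
    result = subprocess.run(
        ["git", "-C", repo, "diff", f"{old_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def flag_diff(diff_text: str, model: str = "some-model") -> str:
    """Ask the model which hunks a reviewer should prioritize. Flag only, never approve."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a code review assistant. Point out hunks that touch "
                    "cryptography, key handling, network I/O, dependencies, or build "
                    "scripts, and anything obfuscated or out of place. Flag only; "
                    "do not approve anything."
                ),
            },
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_diff(release_diff(".", "v1.0.0", "v1.1.0")))
```

Big diffs would still need chunking to fit a context window, and the output is only a priority list for a human who reads the actual code, not a verdict.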

reply
100 sats \ 1 reply \ @unboiled 7 Jan
Maybe I'd employ an LLM in that to flag up things on big diffs.

One good practice I had for big diffs, also prior to sloppage, was insisting on smaller, stacked PRs. Of course tests must pass for each one.
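
The "tests must pass for each one" part can also be checked mechanically. A rough sketch (assuming pytest as the test runner and a clean local checkout; `git rebase -x "pytest"` does roughly the same thing):

```python
# Rough sketch: verify that every commit in a stacked branch passes the test suite,
# not just the tip. Assumes pytest as the test runner and a clean working tree.
import subprocess

def commits_between(base: str, tip: str) -> list[str]:
    """Commit hashes from base (exclusive) to tip (inclusive), oldest first."""
    out = subprocess.run(
        ["git", "rev-list", "--reverse", f"{base}..{tip}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def every_commit_passes(base: str, tip: str) -> bool:
    """Check out each commit in turn and run the tests; restore HEAD afterwards."""
    original = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    try:
        for sha in commits_between(base, tip):
            subprocess.run(["git", "checkout", "--quiet", "--detach", sha], check=True)
            if subprocess.run(["pytest", "-q"]).returncode != 0:
                print(f"tests fail at {sha}")
                return False
        return True
    finally:
        subprocess.run(["git", "checkout", "--quiet", original], check=True)

if __name__ == "__main__":
    print(every_commit_passes("main", "feature/stacked-tip"))
```

CI can enforce the same check per PR in the stack, which keeps each review small.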

reply

Yes. On my own repos, absolutely.

In this case it's me reviewing the code for the products I want to use. Think Signal or, in the LN sphere, Blixt/Zeus. So I generally deal with tag..tag post-release, pre-install. I don't mind big diffs; what I really mind is 60 dependencies that you have to go through, like how Proton is structured, and then refactors happening on these.

FWIW, I sometimes run into the same issue with Bitcoin Core's refactoring policy.

reply

IMHO the best "AI" use case is for memes (especially cat memes).

reply
146 sats \ 3 replies \ @Fenix 7 Jan

Is it true that these vibe-coded projects are full of crap that becomes a problem to manage and maintain throughout the code's lifespan?

reply

Yes, but even if they aren't it's cognitive debt from a maintainer perspective if you don't understand everything it does. The advice many of the vibe kings give you is to just accept everything and let the LLM solve everything, including bugs it coded itself.

That means you know nothing, and you'll be happy, until it nips you in the butt with a bug it gets stuck on, and then you're fucked, because now you have to go through 100k lines of slop and try to make sense of it (which is really, really hard.)

reply
100 sats \ 1 reply \ @Fenix 7 Jan

Like a ticking time bomb. I was really surprised by this. I thought these "devs" at least somewhat kept up with the AI's work. Now that I know they don't read, just like people who ask you to summarize an email and don't read it, I've completely lost respect. One more checkpoint for when I use someone's project:

  • FOSS (check)
  • No Vibe Code (check)
reply

What could work though is that you take some vibe coded thing and you let an LLM code something up for you based on it. Don't publish the resulting product, because you don't want to be held liable (not even legally; reputationally) for stuff you don't understand. Just use it to your own advantage.

reply

I think you're missing the point. It's often difficult to assess whether an idea you have for a project, game, library, app, etc. is actually worth the time to develop. We often have to build prototypes or MVPs to make this assessment. That takes time. AI is a tool that greatly expedites this iterative process.

AI is a tool. Like everything else, it should be used responsibly. Some people will use it to create brain rot content, others to learn, or to more quickly iterate on creative ideas. We just need to learn how to use it responsibly, effectively and (like everything else) in moderation.

reply

If there's no AI, why did you write this post?

It seems to me you are humanizing AI and buying into the pitchmen if you say the prompter is like a PM and the AI is writing the code. Right?

I mean, the AI is guessing the code. That is what is happening. And if the prompter doesn't understand what it is doing, they are little more than a PM. But many product owners and even PMs were once coders and do know what they are doing. They can write code but moved up to also organize and lead projects. Get more people involved.

When I hear people speak about AI, it is more often than not clear to me that they haven't actually spent time to understand the computer science behind LLMs and chatbots. When you do, you realize that Scam Altman is lying to you, and so are most haters of AI. Well, the haters are just ignorant.

It's an incredible bit of software engineering, these LLMs. They have a place. It's not to think for us, but like most tools they can be used effectively or poorly. We are seeing more poorly than effectively, mostly due to laziness and the issues with LLMs.

In the end, the problem is really dumb people. Dumb for falling for the scam, and dumb for creating slop and thinking they are amazing. You don't have to create slop with AI tools. It's just easier to do it than not.

reply

The post is about why we call the ones that are just prompters "devs".
If, for example, here on SN people do not like to see posts written by so-called LLMs, then why do we also accept apps written by the same LLMs? In that case they are not shitGPT anymore, they are called "tools".

Another example: are you going to pay the same amount for a piece or a toy made with a 3D printer as for one made by hand?

reply

I think it's about more than that. It's about mischaracterizing the use of AI as a boolean.

There have always been fakers. Now it's just easier. People that copy and paste. AI does this much better. The results are what matter, and you don't get good results from a prompter that isn't actually a dev. These tools are REALLY dumb. But they are good at guessing, and the clearer the instructions and guardrails you give them, the better the results.

So, these prompters that create slop could be devs or not. But if we say they aren't devs, what does that fix? There are terrible devs out there. There are terrible PMs. Results are the metric.

reply

I will appreciate a junior dev that is trying to write code from his head, even if he fails, more than one that just jumps into a prompt to finish his slop.

Failure is a human feature, and that's how we learn better: from failures, not from making things faster.

reply

Regardless of feelings, the dev that is coding from his head is trying to learn and will benefit from it. AI tools can be used to accelerate learning, but the temptation is to lean on them and cheat yourself out of learning.

20 years ago we didn't have all these learning tools or AI. Open source code I read helped me learn. Reading programming books. And just writing code to see if it would work. The tools have changed. It's much easier to learn now if you want to. Many are just lazy. They will pay for it in the long run.

reply
Many are just lazy. They will pay for it in the long run.

This I understand. What I do not understand is why we still call these guys "devs".

Okay. You do you.

reply
0 sats \ 3 replies \ @anon 7 Jan

If you can't program, just say that, dart. Why gatekeep when you yourself can't program?

reply

you can kiss my ass anon

reply
0 sats \ 1 reply \ @anon 7 Jan

hah! nailed it.

learn to code dart, tard

else your opinion on how others code means shit

reply

I don't have to learn to code. And I do not want to.
But I like and want to pay for PoW, not for LLM output.

reply
0 sats \ 0 replies \ @yfaming 9h

Starting from 2023, we have just entered the fourth year of the AI era. What lies ahead is still unknown.

Before 2025, AI was like a talking library. In 2025, AI grew a pair of hands—it can now type on a keyboard. And 2026?

I believe the paradigm of software development will undergo a dramatic transformation, but no one truly knows what it will look like.

reply
0 sats \ 1 reply \ @Aeneas 7 Jan

Right now I'm waiting for a cybersecurity catastrophe caused by vibe coding to happen. It has to be something really bad, in a really critical system, that scares the living shit out of people. That's the only way for it to stop.

reply

It's only a matter of time. But even after that, vibe coding won't stop. There will just be more checks put in place.

reply

I view AI coding agents as competent employees working for you, whom you can delegate certain coding tasks to.

Can they make mistakes? Yes. But you, as their manager, are responsible for having solid mechanisms in place to proactively prevent errors and retrospectively catch them. There's still work involved, it's just different.

This is my personal experience.

reply

so a bus driver can also call himself an F1 driver, right?

reply

I don't think I understand your analogy. Please explain.

reply

ask shitGPT

reply

Luddite!

reply

probably for reasons similar to how master sculptors would get credited for anything that was produced by their studio, even if most of the actual chipping away was done by an apprentice.

reply
if most current apps are made with all these AI prompts, why do we still call the guys managing the AI prompts "app devs"? Why not call them prompters or PMs (project managers)?

Words and names are supposed to help us navigate the world.

If someone creates an app (regardless of tools used), I don't see why 'project manager' would be a better name than 'app developer'. Or perhaps a better name would be 'app creator'?

reply