I'm fascinated by the ways people are going to attempt to get their desired payload (an ad, a link, an idea) into other people's chat outputs. In this case, it seems that if a model hallucinates a package name once, it may well hallucinate the same name again, and if you publish a real package under that hallucinated name, you might be able to get it downloaded.
Slopsquatting is a type of cybersquatting: the practice of registering a non-existent software package name that a large language model (LLM) may hallucinate in its output, so that someone may unknowingly copy-paste the install command and pull in the fake package without realizing it.[1] Attempting to install a non-existent package should simply fail with an error, but attackers exploit this gap by registering the name first, much as in typosquatting.[2]
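Part of "being careful" here can be automated: before installing a model-suggested name, check whether it exists on the index at all, and how new it is. Below is a rough sketch against PyPI's public JSON API; the 90-day threshold is arbitrary and the whole thing is a sanity check, not a real vetting tool.

```python
# Sketch: pre-install sanity check for model-suggested package names,
# using PyPI's JSON API (https://pypi.org/pypi/<name>/json).
# The min_age_days threshold is an arbitrary, illustrative choice.
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

def check_package(name: str, min_age_days: int = 90) -> str:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return "MISSING: not on PyPI (a potential slopsquat target)"
        raise
    # Earliest upload across all releases; a brand-new package whose
    # name an LLM keeps suggesting deserves extra scrutiny.
    uploads = [
        f["upload_time_iso_8601"]
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "SUSPICIOUS: name is registered but has no uploaded files"
    first = min(
        datetime.fromisoformat(u.replace("Z", "+00:00")) for u in uploads
    )
    age = (datetime.now(timezone.utc) - first).days
    if age < min_age_days:
        return f"SUSPICIOUS: first upload was only {age} days ago"
    return f"OK: on PyPI since {first.date()} ({age} days)"

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(pkg, "->", check_package(pkg))
```

Age and mere existence obviously prove nothing about trustworthiness; the point is only to catch the laziest version of the attack, where the name was registered yesterday or not at all.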
Huh, that's pretty smart. I'm impressed by people's creativity in trying to make a buck.
Fun fact, ChatGPT almost got me to download a malicious software package: #943310. Good thing I was being careful!
impressed with scamming, you mean?
Been using socket.dev for automated supply-chain checks more generally
Also, my Cursor rules prohibit new libs: even though they're rarely malicious, they're often unnecessary or just plain bad choices.
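If you want that same "no new libs without review" rule enforced outside the editor, here's a rough sketch of a CI gate. The allowlist file name is something I made up, and real requirements parsing has more edge cases (URLs, `-r` includes, environment markers) than this handles.

```python
# Sketch: fail CI if requirements.txt names a package that isn't on a
# reviewed allowlist. Both file names below are illustrative.
import re
import sys

ALLOWLIST_FILE = "approved-deps.txt"   # hypothetical: one name per line
REQUIREMENTS_FILE = "requirements.txt"

def read_names(path: str) -> set[str]:
    names = set()
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # strip comments/blanks
            if not line:
                continue
            # Keep only the distribution name, dropping extras and
            # version specifiers ("requests[socks]>=2.31" -> "requests").
            m = re.match(r"[A-Za-z0-9._-]+", line)
            if m:
                names.add(m.group(0).lower())
    return names

if __name__ == "__main__":
    unapproved = read_names(REQUIREMENTS_FILE) - read_names(ALLOWLIST_FILE)
    if unapproved:
        print("Unreviewed dependencies:", ", ".join(sorted(unapproved)))
        sys.exit(1)
    print("All dependencies are on the reviewed allowlist.")
```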
If I ever wanted to get into the malware business, this would be so awesome.
Damn, I was hoping this was some kind of new fetish
It has that kind of ring to it. But let's be honest: can there ever be new fetishes?
This is a fascinating and slightly alarming insight into the intersection of AI and cybersecurity. Slopsquatting highlights a new kind of vulnerability that stems from our growing reliance on large language models. When people blindly trust AI-generated content, especially in technical domains like coding, they open the door to subtle but dangerous attacks. The fact that attackers can anticipate hallucinated package names and register them to push malicious code is a clear sign that developers must stay vigilant. It also underscores the importance of verification and of not copy-pasting code or commands without understanding them. As AI tools evolve, so will the tactics of those looking to exploit them; we need awareness and better safeguards, fast.
That's some next-level psyops right there. Feel free to download the software package Psyoptica - it automatically scans AI for hallucinated and fictitious software that could be used as malware or a trojan horse.
LOL, people can't let go of their habitual ways of thinking.