
In the spirit of the new year, Gurwinder posted a list of 15 "useful concepts" with cute names, which was really a shortened version of his full list of 26 useful concepts (surely also with cute names), but you don't have to read all of them.

Cutely-named useful concept no. 2 was this:

  2. Slopaganda: More online articles are written by AI than by humans. And AI is now better at persuading people than most people are. Who wins in a world of unlimited propaganda? Not those with the best arguments, but those with the most slop.

For some reason hearing it put this way clicked for me: social media is turning into an eclipse attack.

In Bitcoin, an eclipse attack is where an attacker tries to become the operator of every node to which your node is connected. An attacker who successfully eclipses another node can prevent you from learning about new blocks, receiving transactions, and even sending transactions. And as long as your node doesn't realize it is eclipsed, it will seem like everything is working just fine.
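The mechanism can be shown in a toy model. This is an illustrative sketch only, not Bitcoin's actual P2P code; the `Peer`/`Node` classes, the 8-peer limit, and the block heights are all made up for the example:

```python
# Toy model of an eclipse attack. All names and numbers here are
# illustrative; this is not how Bitcoin Core's networking works.

class Peer:
    def __init__(self, tip):
        self.tip = tip  # the chain tip (block height) this peer reports

    def best_tip(self):
        return self.tip

class Node:
    MAX_PEERS = 8  # nodes only connect to a handful of peers

    def __init__(self, peers):
        self.peers = peers[:self.MAX_PEERS]

    def sync(self):
        # The node believes whatever its peers tell it. If every peer
        # slot is attacker-controlled, it never hears about the real chain.
        return max(p.best_tip() for p in self.peers)

honest_tip, stale_tip = 900_000, 899_000

# Healthy view: even one honest peer is enough to learn the real tip.
mixed = Node([Peer(stale_tip)] * 7 + [Peer(honest_tip)])
print(mixed.sync())     # 900000

# Eclipsed view: the attacker fills every peer slot and withholds new
# blocks. Nothing errors out; everything "works", just on stale data.
eclipsed = Node([Peer(stale_tip)] * 8)
print(eclipsed.sync())  # 899000
```

The point the toy model makes: the eclipsed node has no local signal that anything is wrong, because every channel it could check is controlled by the same party.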

Gurwinder's point about slopaganda really struck me: perhaps the danger with LLM-generated content (slop) is not that we won't be able to tell the difference, nor even that we will waste time evaluating it, but rather that it makes it possible to deluge us so thoroughly that we get eclipsed by the slop and never learn about reality.

Slop is cheap to make. If there are a billion slop posts on your favorite social media platform, you might still see some real things. But what if there are a trillion? It is surely the case that we won't be able to block all slop and that some percentage will get through to clutter up our online spaces. There is a number at which even the small percentage getting through is enough to eclipse us.

202 sats \ 0 replies \ @optimism 11h

I don't think LLM slop is an eclipse attack; I think social media in general is. The signal-to-noise ratio on Twitter has been bad for a while now. It started around 2019, I think, even before Covid, and got worse over time, especially when I started getting all the Elon fanboyism in the algo feed. (I didn't even use the home feed, and it still bothered me every time I made the mistake of checking it.)


I think we already got eclipsed, even without the use of AI slop. Covid showed me that. AI slop may accelerate it, though.


True. Intelligence begins to take on artificial aspects at the size of the smallest conspiracy.

I'm starting to see independence as a defining principle for people going forward. It always has been, but LLMs and slop represent a significant test.

It's never been so cheap to create tailored stupidity that sounds smart. Hopefully sounding smart loses value as a proxy for being correct.

202 sats \ 0 replies \ @freetx 11h

There are tangential issues as well... we are probably already at the point where training LLMs on "non-slop" inputs is becoming difficult, and at a certain point it will become impossible. No one is really clear what's going to happen to model quality at that point (will models become obsessively focused on emoji and em-dash communication?).

Secondly, I think of most of this like a "2nd Amendment" issue. We will all need our own open-source / self-hosted LLMs to help combat the big-gov / corp AI onslaught.

Directly reading the internet might become too difficult in, say, 10 years; instead, your personal LLM will read it for you, to help strip out the obvious agenda-driven slop and present you with a more grounded take.
