The Reddit CEO posted this today:

1. Clear labeling for non-human accounts
At the end of last year, we launched verified profiles for brands, publishers, and creators. For professional accounts, being clearly labeled increases transparency and helps their content be accepted in relevant communities.

Next, we’re standardizing how automation shows up on Reddit. Accounts that use automation in allowed ways (what many call “good bots”) will be labeled as [App]. If you see that label, you know you’re interacting with a machine, not a person.

Developers can register their apps to receive this label (there will be more about this in r/redditdev).

2. Continued removal of nefarious bots and spam

We hate it as much as you do and already remove the vast majority of it (an average of 100K accounts per day), often before anyone sees it. We’ll continue to remove nefarious bot content, including spam. 

3. Human verification for automated or otherwise fishy behavior

If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it. This will be rare and will not apply to most users. Accounts that can’t pass may be restricted. 

To be clear, this is not sitewide human verification, let alone sitewide ID verification.

4. Reporting suspected automation
Redditors have long been the best bullshit detectors, and increasingly great Turing testers. We’ll make reporting easier and more flexible (these days, we can infer most issues from a report without a lot of context). I’d also like to include comments from other users pointing something out (e.g., “nice post, bot, now fuck off”), since that’s most users’ preferred reporting method.

SN has quite a few clankers that show up trying to collect sats, and I'm sure the problem is a lot worse on Reddit. Nevertheless, my knee-jerk reaction here is that this is bad and a slow move towards IDing everyone who uses Reddit.

Because the devil is in the details of number 3. If they do it too lightly, nothing changes and the site is full of bots. If they do it too hard, they start "proof-of-human"ing everybody.

When X started rolling out its age-estimation stuff, my account (not blue-checked) was heavily restricted at first. I could hardly use the app because every other post was "This content is restricted in your area until we verify your age" or something like that. Eventually it got better, but some posts are still hidden.

While I don't like the proof-of-human stuff, I do wonder: what can we do? Pay to post and downzapping are good bot reducers, but clearly they aren't enough to prevent it completely. Thoughts?

I really need to do an analysis of clanker profitability and clanker identification.
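
A zeroth-order version of that profitability analysis is just expected value per post. Here's a minimal sketch in Python; every number and parameter name is a made-up assumption, not data from SN or Reddit:

```python
# Back-of-the-envelope clanker economics. All numbers are illustrative
# assumptions, not measurements of SN or Reddit.

def expected_profit_per_post(
    post_fee_sats: float,     # pay-to-post cost
    p_zap: float,             # chance a post earns any zaps
    avg_zap_sats: float,      # average sats earned when zapped
    p_downzap: float,         # chance the post gets downzapped
    avg_downzap_sats: float,  # average sats lost when downzapped
) -> float:
    revenue = p_zap * avg_zap_sats
    losses = post_fee_sats + p_downzap * avg_downzap_sats
    return revenue - losses

# A bot farm keeps spamming as long as this is positive. Raising the fee
# or the downzap rate pushes it negative:
print(expected_profit_per_post(1, 0.05, 100, 0.3, 10))   # 1-sat fee: +1.0 per post, still profitable
print(expected_profit_per_post(10, 0.05, 100, 0.3, 10))  # 10-sat fee: -8.0 per post, unprofitable
```

The catch is that a fee high enough to make spamming unprofitable also taxes legitimate posters.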

reply
104 sats \ 1 reply \ @Wumbo 26 Mar
Because the devil is in the details of number 3. If they do it too lightly, nothing changes and the site is full of bots. If they do it too hard, they start "proof-of-human"ing everybody.

We have a new version of Shotgun KYC

"Prove to me you are not a robot or I will freeze your account"

While I don't like the proof-of-human stuff, I do wonder: what can we do? Pay to post and downzapping are good bot reducers, but clearly they aren't enough to prevent it completely. Thoughts?

My gut tells me the solution is a combination of:

  • Pay to Post
  • Users being able to set a post sat filter, like on SN
  • Web of Trust: if you are in my Web of Trust, it applies a multiplier to the post sat filter value. Example: filter value × 0.5 for Web of Trust users (see the sketch below).
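
A minimal sketch of how those pieces could combine; the function names, the 0.5 multiplier, and the example numbers are all hypothetical, not SN's actual implementation:

```python
# Per-viewer sat filter with a web-of-trust discount.
# Everything here is illustrative: names, numbers, and data shapes.

WOT_MULTIPLIER = 0.5  # trusted authors only need to clear half the filter

def effective_threshold(viewer_filter_sats: int, author: str, viewer_wot: set[str]) -> float:
    """Sats a post must have earned before this viewer sees it."""
    if author in viewer_wot:
        return viewer_filter_sats * WOT_MULTIPLIER
    return float(viewer_filter_sats)

def post_visible(post_sats: int, viewer_filter_sats: int, author: str, viewer_wot: set[str]) -> bool:
    return post_sats >= effective_threshold(viewer_filter_sats, author, viewer_wot)

# With a 10-sat filter: a stranger's post needs 10 sats, a WoT author's only 5.
wot = {"alice", "bob"}
print(post_visible(6, 10, "randobot", wot))  # False: unknown author, threshold is 10
print(post_visible(6, 10, "alice", wot))     # True: trusted author, threshold is 5
```

Pay to post then sits in front of this: the fee deters volume spam, and the filter plus web of trust hides whatever slips through.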

I am hoping k00b and Car will talk about your other related post tomorrow on SNL
#1458876

Maybe the two wise men have some good insights.

reply

I so dislike this trend towards KYC-ing everything that it's easy for me to be somewhat flippant, but what to do about bot content is a pretty hard problem. I, too, would be very interested to hear them discuss the matter.

reply

KYC for reddit?

reply