Thanks for the thoughtful write-up. I don't know enough about the trust score (where can I learn more, btw?) to really comment intelligently on it.
But the thing I care about right now, the thing that prompted my original post, is the obvious bots. The post I made yesterday (#619397) was about an obvious bot that managed to get some sats and was cluttering up decent posts. I would consider getting rid of obvious bots a "common good".
Downzapping and muting users who are actual humans seems different to me. I may not like their opinions, or think they post too many low-value comments - but that's a lot more individual. I may want to mute them, but I wouldn't necessarily want to penalize the original poster for it.
So for me, downzapping one individual post doesn't seem very valuable. What would be most valuable is seeing a cruddy post, looking at the user's history, finding AI-generated junk, and deciding the user is a bot or a trash user.
Taking that as my starting point, I would suggest:
  • Getting rid of downzapping
  • Keeping muting (free), as is
  • Doing a "10x mute" where you can pay money to mute a user, which would be a signal to lower the user's trust score. And this could be rewarded, kind of like what you were suggesting for downzapping.
And I also think doing nothing is an okay strategy, because the current situation isn't that bad.
I went on Twitter the other day, and a large percentage of the comments - even on high-quality posts - are either junk or useless "me too" comments. Stacker News is so much better.
Downzapping reduces the reach of a post and users who have lots of outlawed content have heavily reduced visibility on their future posts. That sort of seems like what you want.
I don't think reducing their trust score would reduce their reach, at least not directly. What trust does is affect their ability to influence the visibility of other posts.
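To make the distinction I'm drawing concrete, here's a toy sketch - hypothetical function names and numbers, not Stacker News's actual code - of reach (outlawed content hurting the poster's own visibility) versus trust (a voter's weight on other posts' rankings):

```python
# Toy sketch, entirely hypothetical - not SN's real ranking code.
# Reach: each outlawed post halves the user's future visibility.
# Trust: a low-trust voter's zap moves rankings proportionally less.

def post_visibility(base_score: float, outlaw_count: int) -> float:
    """Outlawed content reduces the *poster's* reach."""
    return base_score * (0.5 ** outlaw_count)

def weighted_zap(zap_sats: int, voter_trust: float) -> float:
    """Trust scales the *voter's* influence on other posts."""
    return zap_sats * voter_trust

# A user with 3 outlawed posts: 1/8th visibility on future posts.
print(post_visibility(100.0, 3))  # 12.5
# The same zap from a half-trust voter moves rankings half as much.
print(weighted_zap(1000, 0.5))    # 500.0
```

The point being: lowering trust changes the second number, not the first.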
I share your desire to only wield these powers against bots and intentional scammers.