Do you know if any Nostr apps or relays are doing any sort of automatic or manual moderation to remove illegal content?
What happens when someone posts vile photos?
Relays can silence accounts, and users can mute them. Better tools for moderation are in the roadmap.
reply
I believe there are also some AI filters being run by relays.
reply
Probs get downvoted to oblivion by the freedom crowd, but I saw some pretty horrific child-related images on nostr and have never really used it since. Not sure how they fix it, but it needs moderation, as that shit is just not acceptable ever.
reply
Hard luck for whatever you saw on Nostr. But the last time I checked, Nostr was a decentralized system. What do you mean by "it needs moderation"?
reply
I don’t want to see child porn when browsing an app. It’s a major problem and why I’m choosing not to use the decentralised platform.
reply
Well, good luck with using the centralized platforms.
reply
I've never seen any on nostr. I'm using Amethyst. It's probably possible to find, I assume, but the same is true for the internet in general, so that's not an issue unique to nostr.
reply
I've used it for a year and never saw anything like it.
I think the main relays are applying filtering already, but you might have found a relay that isn't, certainly not a default one from one of the main apps (Amethyst, Primal, ...).
reply
If you're expecting the relays to curate, you're going to have a bad time; decentralization means being your own curator.
That said, it's unclear how @yoshi would have come across such content without following or browsing the key that posted (or re-posted) it. Some clients may have suggested content, but I can't imagine that being in any of them; generally, Nostr works only by downloading events you ask for. This is what people seem to like about it.
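To illustrate what "only downloading events you ask for" means, here's a minimal sketch of a NIP-01 subscription (the relay URL and pubkeys are placeholders, not real values): the client sends a REQ with a filter, and the relay only pushes events matching that filter.

```python
# Minimal sketch: a Nostr client only receives events matching the
# filters it explicitly subscribes to (NIP-01). Relay URL is illustrative.
import asyncio
import json

import websockets  # pip install websockets

RELAY = "wss://relay.example.com"  # placeholder relay
FOLLOWED = ["<hex pubkey 1>", "<hex pubkey 2>"]  # keys you chose to follow

async def fetch_followed_notes():
    async with websockets.connect(RELAY) as ws:
        # Ask only for kind-1 text notes from authors we follow;
        # nothing outside this filter is pushed to us.
        req = ["REQ", "my-sub", {"authors": FOLLOWED, "kinds": [1], "limit": 50}]
        await ws.send(json.dumps(req))
        async for raw in ws:
            msg = json.loads(raw)
            if msg[0] == "EVENT":
                print(msg[2]["content"])
            elif msg[0] == "EOSE":  # end of stored events
                break

asyncio.run(fetch_followed_notes())
```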
reply
Were you on the "global" feed?
The "following" feed is better, as you get some sort of web of trust / reputation: you consciously choose who you follow, and presumably what they post is to your liking.
reply
Yeah, that's fair enough, but it doesn't change the fact that I am one wrong click away from seeing something that just shouldn't be there.
reply
Strfry lets you pipe events through any moderation logic you want via its plug-in system.
One such plugin:
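Roughly, a write-policy plugin is just a process that reads one JSON request per line on stdin and prints one JSON decision per line on stdout. Here's a minimal sketch (not the plugin referenced above; the banned-word check is a placeholder for real moderation logic such as calling an image classifier):

```python
#!/usr/bin/env python3
# Sketch of a strfry write-policy plugin: strfry pipes one JSON request
# per line to stdin and reads one JSON decision per line from stdout.
# The keyword check is a stand-in for real moderation logic.
import json
import sys

BANNED = {"badword1", "badword2"}  # placeholder terms

for line in sys.stdin:
    req = json.loads(line)
    event = req["event"]
    ok = not any(word in event.get("content", "").lower() for word in BANNED)
    print(json.dumps({
        "id": event["id"],                       # event being judged
        "action": "accept" if ok else "reject",  # strfry drops rejected events
        "msg": "" if ok else "blocked: policy violation",
    }), flush=True)  # flush so strfry sees the decision immediately
```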
reply
The only example of client-side filtering I'm aware of is in Amethyst: if X number of your follows flagged something, it wouldn't be shown.
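Roughly, that threshold logic could look like this (a sketch only; the function name and threshold are made up, not Amethyst's actual code; kind 1984 is the NIP-56 report kind):

```python
# Rough sketch of follow-based flag filtering. NIP-56 reports are
# kind-1984 events whose "e" tags reference the reported note's id.
REPORT_KIND = 1984
THRESHOLD = 3  # hide a note once this many of your follows reported it

def hidden_event_ids(report_events, follows):
    """Count reports per note, only from pubkeys the user follows."""
    counts = {}
    for ev in report_events:
        if ev["kind"] != REPORT_KIND or ev["pubkey"] not in follows:
            continue
        for tag in ev["tags"]:
            if tag and tag[0] == "e":  # tag = ["e", "<reported event id>", ...]
                counts[tag[1]] = counts.get(tag[1], 0) + 1
    return {eid for eid, n in counts.items() if n >= THRESHOLD}
```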
I suspect most of the main relays are doing AI moderation. For illegal content, it probably works really well.
reply
The only AI I know of is text-only.
reply
You haven't seen image feature detection? It's pretty common, e.g. "Does this image have a nipple in it?"
reply
I am aware of the concept, but not of whether or which services are being used for this, or whether these are developed in-house or by outside automation providers.
reply
Most are probably using cloud APIs. I've looked at AWS's before: https://aws.amazon.com/rekognition/content-moderation/. I'm sure there are a lot of them available.
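For example, a sketch with boto3 (assumes AWS credentials are configured; the confidence threshold is arbitrary):

```python
# Sketch of calling the Rekognition content-moderation API with boto3.
import boto3

rekognition = boto3.client("rekognition")

def moderation_labels(image_bytes: bytes, min_confidence: float = 60.0):
    """Return the moderation labels Rekognition assigns to an image."""
    resp = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    # Each label has a Name (e.g. "Explicit Nudity"), ParentName, Confidence.
    return [(label["Name"], label["Confidence"]) for label in resp["ModerationLabels"]]
```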
It should be fairly easy to run your own models too. This NSFW-detection model is the most popular image classifier on Hugging Face: https://huggingface.co/Falconsai/nsfw_image_detection
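Running it locally is a few lines with the transformers pipeline (a sketch; weights download on first use, the threshold is arbitrary, and the "nsfw" label name follows the model card at the time of writing):

```python
# Sketch of running the Falconsai classifier locally with transformers.
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="Falconsai/nsfw_image_detection")

def is_nsfw(image_path: str, threshold: float = 0.8) -> bool:
    # classifier() returns e.g. [{"label": "nsfw", "score": 0.97},
    #                            {"label": "normal", "score": 0.03}]
    for pred in classifier(image_path):
        if pred["label"] == "nsfw" and pred["score"] >= threshold:
            return True
    return False
```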
reply
Do you need moderation in a communication protocol?
Why not choose other relays that show the content you want to see? Or choose relays that ban the content you want to avoid...
reply
My thoughts exactly. It seems to me that content moderation should happen on layers higher up, preferably on the client side. Each user can choose his own censorship policies, and I don't see anything wrong with that. I just don't want other people making these decisions for me. I'll choose the filters and algorithms myself, thank you very much.
reply
Run your own nostr relay. Your relay = your rules
reply
Businesses have to follow laws.
(So do individuals, it's just harder to enforce at scale.)
Furthermore, most people actually don't want to host certain types of images.
reply
Try using paid relays.
reply
Depends on which jurisdiction one plays in.
reply
Most social apps/relays serve most jurisdictions. Exposure is not limited to registration location.
reply
Well, if the country you live in is a superpower, you can actually ignore the laws of every country except the one you host in. That's why German post-removal requests are ignored by Gab, for example, and why Twitter can sort of ignore Brazilian law (or at least force Brazil to block on their end).
reply
It seems to me content moderation should happen client-side. Each user can choose his own set of filters and algorithms to use. Nostr's architecture allows this, right?
reply
From a UX and ideal standpoint, yes, I agree, but it isn't that simple because of scaling challenges.
We need servers/hosts in order to scale anything many people want to use together, and the service of hosting is regulated in most countries.
Additionally, most hosts don't want to have to host some kinds of data, which means it could/should never make it to the client at all.
Automation on the client side could also be a challenge, and you definitely don't want manual moderation for some images...
reply
I was never too deep into Nostr, but then dropped it (it's been at least a year) when, for whatever reason, there was just so much porn in my feed. The appearance of the porn was very sudden, too.
I'm ready to check it out again though.
reply
fishcake gets angry. People were offended by https://nos.social/tagr-bot
reply
I don't think social moderation meets legal requirements. Also, I think OpenAI only works for text, not images, video, etc.
reply
I think so, but I'm not sure; they (OpenAI) will have to somehow examine the images they produce. The legal part should be done by the relay and the moderation part by the app.
reply