
It could show a percentage indicating how much of a comment may have been written with an LLM.
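Roughly, the site would pass the comment text to some detection service and display the returned score. A minimal sketch of that flow; the endpoint URL, request payload, and `llm_probability` response field below are all hypothetical, not an existing SN or detector API:

```python
import requests

def llm_likelihood(comment_text: str) -> float:
    """Return an estimated probability (0-1) that the comment is LLM-written."""
    resp = requests.post(
        "https://detector.example/api/score",  # hypothetical detection service
        json={"text": comment_text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["llm_probability"]  # hypothetical response field

score = llm_likelihood("Great insight, thanks for sharing!")
print(f"~{score:.0%} likely LLM-written")
```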
110 sats \ 0 replies \ @random_ 4 Jan
Just increase the post costs.
One must consider botting incentives. It costs 1 sat to post, so a single upvote is enough to break even (a very low barrier to entry).
10 or 100 sats should help.
It's still possible to "profit" from exceptional content, but it's a less attractive value proposition for bots.
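The back-of-the-envelope math, assuming purely for illustration that an average zap is worth 1 sat:

```python
SATS_PER_ZAP = 1  # assumed average zap size, not SN's actual figure

def breakeven_zaps(post_cost_sats: int) -> int:
    """Zaps needed for a bot comment to recoup its posting cost."""
    return -(-post_cost_sats // SATS_PER_ZAP)  # ceiling division

for cost in (1, 10, 100):
    print(f"post cost {cost} sats -> {breakeven_zaps(cost)} zaps to break even")
```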
It's almost trivial to generate the response with an LLM, then modify it: substitute words with synonyms, use additional statistical models to rephrase in different moods (imperative/declarative) or voices (active/passive), sprinkle in misspellings, bad punctuation, and fragments/run-on sentences, and boom, you're no longer detectable as a bot.
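A toy sketch of that kind of post-processing; the synonym table and typo pass are placeholders, where a real pipeline would use proper paraphrasing models:

```python
import random

SYNONYMS = {"good": "decent", "important": "key", "however": "but"}

def roughen(text: str, typo_rate: float = 0.05) -> str:
    """Apply cheap perturbations to LLM output: synonym swaps plus random typos."""
    words = []
    for w in text.split():
        w = SYNONYMS.get(w.lower(), w)           # naive synonym substitution
        if random.random() < typo_rate and len(w) > 3:
            i = random.randrange(len(w) - 1)     # swap two adjacent letters
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
        words.append(w)
    return " ".join(words)

print(roughen("It is important to remember that good content wins."))
```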
Could also train your own model, one whose outputs the detection APIs haven't themselves been trained to recognize.
Using this API at scale will incur costs for SN, and it only moves the goalposts for bots slightly.
But bots using SN actually generate fees for SN. If a bot makes good content, it will be rewarded. What does it matter whether an algorithm shows you a post you like or an algorithm makes a post you like?
0 sats \ 0 replies \ @2 OP 4 Jan
I meant LLM in the title