This. I think LLMs are just making it more obvious that we've been living in the Age of Disinformation for a long time already. And that might actually be a good thing.
If I think something has been written by an LLM, it's usually because it's boring, sounds generic and has other flaws.
So the problem I have with LLM writing is that it's usually boring, generic, and has other flaws, not necessarily that it was written by an LLM. But if someone pretends they wrote it themselves when they actually used an LLM, that earns them extra antipathy points. As a human, I don't like to be deceived, especially not in such a low-effort manner.
I think the main problem with bots currently is that they don't tell you they're bots. That's deception, and as humans we're entitled to feel deceived, which is a negative emotion.
For now, anyway. There's no reason LLMs won't be producing much more creative and aesthetic prose a year from now.
The vast majority of human writing, bitcoiner or not, is pretty generic, boring, and flawed in other ways...