It's probably written by an LLM, which means that, in addition to setting a bad precedent, it could be filled with all varieties of subtle horseshit.
Any post can be filled with all varieties of subtle horse poop. So why is using an LLM a bad precedent?
How does one deal with a reality where you don't know if it's real or manufactured?
Lots of real things are manufactured
And if it's manufactured, how does one know whether it might be mostly right but crucially wrong?
That risk exists whether or not it is manufactured. The only way to distinguish error from truth, absent any infallible authority, is through some degree of research and careful thought.
Good comment, good objections.
Any post can be filled with all varieties of subtle horse poop. So why is using an LLM a bad precedent?
Because generating horseshit at 100000x velocity renders intolerable things that could be tolerated in lesser doses.
That risk exists whether or not it is manufactured.
Yes and no. Yes, in theory, the standard for an LLM-generated thing ought to be the same standard as for a human-generated thing: is it useful, entertaining, or whatever you're looking for. In practice, nobody behaves this way for almost anything in real life. In the same way that the purpose of sex is not simply orgasm, but some kind of connection with another being, the purpose of most utterances is not restricted to the truth value of their postulates. Something more is both sought for and implied when we communicate with each other. Astroturfing w/ AI "content" violates that implicit agreement.
A less fluffy refutation is that most human concerns (not just the ones that aren't purely technical, but even some that are) are crucially interlaced with tacit knowledge that the speaker doesn't even know she possesses. In other words: a real person talking about a non-trivial real-life experience brings in experience that would be hard to describe or enumerate but that crucially informs the interaction. The absence of these illegible elements is harder to detect than whether some program compiles or not, but it matters. (And note, this is true of even the hardest of engineering disciplines. The research on tacit knowledge and expertise is clear on that account.)
The only way to distinguish error from truth, absent any infallible authority, is through some degree of research and careful thought.
See above; but also: the truth, the way we usually use the term, is not the only thing at issue.
@ek 4 Jan
This. I think LLMs are just making it more obvious that we've been living in the Age of Disinformation for a long time already. And making this more obvious might actually be a good thing.
If I think something has been written by an LLM, it's usually because it's boring, sounds generic and has other flaws.
So the problem I have with LLM writing is that it's usually boring, sounds generic and has other flaws - not necessarily that it was written by an LLM. But if someone pretends they wrote it themselves when they actually used an LLM, that earns extra unsympathy points. As a human, I don't like to be deceived - especially not in such a low-effort manner.
I think the main problem with bots currently is that they don't tell you they are bots. That's deception, and as humans, we're entitled to feel deceived, which is a negative emotion.
For now, anyway. There's no reason LLMs won't be producing much more creative and aesthetic prose a year from now.
The vast majority of human writing, bitcoiner or not, is pretty generic, boring, and flawed in other ways...