Good comment, good objections.
Any post can be filled with all varieties of subtle horse poop. So why does using an LLM set a bad precedent?
Because generating horseshit at 100000x velocity renders intolerable things that could be tolerated in lesser doses.
That risk exists whether or not it is manufactured.
Yes and no. Yes, in theory, the standard for an LLM-generated thing ought to be the same as for a human-generated thing: is it useful, entertaining, or whatever you're looking for. In practice, nobody behaves this way for almost anything in real life. In the same way that the purpose of sex is not simply orgasm but some kind of connection with another being, the purpose of most utterances is not restricted to the truth value of their postulates. Something more is both sought and implied when we communicate with each other. Astroturfing w/ AI "content" violates that implicit agreement.
A less fluffy refutation is that most human concerns (e.g., things that are not purely technical; but even some things that are purely technical) are crucially interlaced with tacit knowledge that the speaker doesn't even know she possesses. In other words: a real person talking about a non-trivial real-life experience brings in knowledge that would be hard to describe or enumerate but that crucially informs the interaction. The absence of these illegible elements is harder to detect than whether some program compiles or not, but it matters. (And note, this is true of even the hardest of engineering disciplines. The research on tacit knowledge and expertise is clear on that account.)
The only way to distinguish error from truth, absent any infallible authority, is through some degree of research and careful thought.
See above; but also: the truth, the way we usually use the term, is not the only thing at issue.