1546 sats \ 0 replies \ @elvismercury OP 5 Jan \ parent \ on: LLMs and SN, redux meta
Good comment, good objections.
Because generating horseshit at 100000x velocity renders intolerable what could be tolerated in smaller doses.
Yes and no. Yes, in theory, the standard for an LLM-generated thing ought to be the same as for a human-generated thing: is it useful, entertaining, or whatever you're looking for. In practice, almost nobody behaves this way about anything in real life. In the same way that the purpose of sex is not simply orgasm, but some kind of connection with another being, the purpose of most utterances is not restricted to the truth value of their propositions. Something more is both sought and implied when we communicate with each other. Astroturfing w/ AI "content" violates that implicit agreement.
A less fluffy refutation is that most human concerns (i.e., things that are not purely technical, and even some that are) are crucially interlaced with tacit knowledge that the speaker doesn't even know she possesses. In other words: a real person talking about a non-trivial real-life experience brings in knowledge that would be hard to describe or enumerate but that crucially informs the interaction. The absence of these illegible elements is harder to detect than whether some program compiles, but it matters. (And note, this is true even of the hardest engineering disciplines. The research on tacit knowledge and expertise is clear on that account.)
See above; but also: the truth, the way we usually use the term, is not the only thing at issue.