Hello @Svetski! A slightly provocative question, asked without any malice: what about the potential bias of a large language model trained on a dataset of Bitcoin-related text?
Yes - it will definitely have a bias.
Language models are a mirror of the data they're trained on, and ALL data has an inherent bias.
A bias is simply a "model of the world", an "opinion", or a "worldview". Everything that's ever been written has a worldview (or bias).
Therefore, every LLM ever built will have... you guessed it... a bias!
The key is not to try and remove bias (impossible) but to have MANY models, each with their own biases.
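To make "many models, many biases" concrete, here's a minimal sketch using the Hugging Face transformers pipeline: the same prompt sent to two different models will often be completed in noticeably different directions, which is each model's bias showing. The model names and the prompt are just illustrative placeholders, not a recommendation.

```python
# A minimal sketch: one prompt, two models, two "worldviews".
# Model names and the prompt are illustrative placeholders.
from transformers import pipeline, set_seed

set_seed(42)  # make the sketch reproducible

prompt = "The purpose of money is"
for model_name in ["gpt2", "distilgpt2"]:
    generator = pipeline("text-generation", model=model_name)
    out = generator(prompt, max_new_tokens=25, do_sample=True)
    print(f"--- {model_name} ---")
    print(out[0]["generated_text"])
```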
As Bitcoiners, our bias and model of the world are different from the mainstream, fiat model of the world.
As a result, the more Bitcoin data the model is trained on, the more it will represent some "aggregation" of the Bitcoin model of the world.
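For anyone curious what "training on more Bitcoin data" looks like mechanically, here's a rough sketch of continued pretraining with Hugging Face transformers. The file bitcoin_corpus.txt, the base model, and the hyperparameters are all assumed placeholders, not a recipe: the point is only that a causal LM keeps absorbing whatever worldview its training text carries.

```python
# A rough sketch of continued pretraining on a domain corpus.
# Assumes a local file "bitcoin_corpus.txt" (one document per line) exists;
# the base model and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical corpus of Bitcoin-related text.
dataset = load_dataset("text", data_files={"train": "bitcoin_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False -> plain next-token (causal) language modeling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bitcoin-lm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the more domain text it sees, the more its "worldview" shifts
```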
Does that make sense? Let me know if I can clarify further :)
The key is not to try and remove bias (impossible) but to have MANY models, each with their own biases. As Bitcoiners, our bias and model of the world are different from the mainstream, fiat model of the world.
👍 💯