Yes - it will definitely have a bias.
Language models are a mirror of the data they're trained on. ALL data has an inherent bias.
A bias is simply a "model of the world", an "opinion", or a "worldview". Everything that's ever been written has a worldview (or bias).
Therefore, every LLM ever built will have... you guessed it... a bias!
The key is not to try to remove bias (impossible) but to have MANY models, each with their own biases.
As Bitcoiners, our bias and model of the world are different from the mainstream, fiat model of the world.
As a result, as a model is trained on more and more Bitcoin data, it will increasingly represent some "aggregation" of the Bitcoin model of the world.
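To make the "mirror of the data" idea concrete, here's a toy sketch in plain Python. The two tiny "corpora" and the mixing weights are invented purely for illustration - a real LLM is vastly more complex - but the principle is the same: a trivial next-word model's "opinion" shifts as the proportion of Bitcoin-flavoured text in its training mix grows.

```python
from collections import Counter

# Two tiny, invented "corpora" standing in for different worldviews.
FIAT_CORPUS = "money is debt . inflation is normal . the central bank manages money".split()
BITCOIN_CORPUS = "money is scarce . inflation is theft . bitcoin fixes money".split()

def train_bigram(corpus, weight=1.0):
    """Count weighted next-word frequencies (a stand-in for 'learning from data')."""
    counts = {}
    for prev, nxt in zip(corpus, corpus[1:]):
        counts.setdefault(prev, Counter())[nxt] += weight
    return counts

def merge(*models):
    """Aggregate the counts from several trained models into one model."""
    merged = {}
    for model in models:
        for prev, counter in model.items():
            merged.setdefault(prev, Counter()).update(counter)
    return merged

def complete(model, prompt_word):
    """Return the most likely next word after prompt_word."""
    return model[prompt_word].most_common(1)[0][0]

# Same fiat data every time, but an increasing share of Bitcoin data in the mix.
for btc_weight in (0.5, 1.0, 3.0):
    model = merge(train_bigram(FIAT_CORPUS, 1.0), train_bigram(BITCOIN_CORPUS, btc_weight))
    print(btc_weight, "->", "money is", complete(model, "is"))
```

With a small Bitcoin weight it completes "money is" with "debt"; crank the weight up and it flips to "scarce". That's all "bias" means here: the model echoes whatever mixture of worldviews it was fed.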
Does that make sense? Let me know if I can clarify further :)
šŸ‘ šŸ’Æ