
ok..
So a GPT wrapper is simply taking a bunch of data (say, your docs, a few books, whatever), chunking it up and then storing it in what's called a "vector database". Once you've done that, you have ChatGPT (or another LLM) reference the most relevant "chunk" from your vector database when someone asks a question.
This is kind of like inserting an example when you're using ChatGPT. What you're essentially doing is inserting relevant context so that when the model responds, its answer is more relevant.
It's a great, cheap, quick, "hacky" way to get a model to give more bitcoin-like responses, but it's not really a "Bitcoin model", nor has the model actually been "trained".
Where it falls over is when you ask anything that's not specifically in the vector store. Then you're basically back to plain ChatGPT.
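The wrapper flow above can be sketched in a few lines. This is a toy version: the bag-of-words "embedding", the example chunks, and the function names are all stand-ins I've made up - a real wrapper would call a proper embedding model and a real vector database, but the chunk-store-retrieve-insert shape is the same.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A real wrapper would
    # call an embedding model here instead - this is just a sketch.
    return Counter(text.lower().split())

def cosine(a, b):
    # Similarity between two count vectors (higher = more relevant).
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "vector database": chunks of your docs stored with their vectors.
chunks = [
    "The 21 million coin cap is enforced by consensus rules.",
    "Difficulty adjusts every 2016 blocks to target 10-minute blocks.",
]
store = [(c, embed(c)) for c in chunks]

def retrieve(question, k=1):
    # Rank every stored chunk against the question, keep the top k.
    qv = embed(question)
    ranked = sorted(store, key=lambda cv: cosine(qv, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

question = "How often does the difficulty adjust?"
context = retrieve(question)[0]
# The retrieved chunk gets pasted into the prompt sent to the LLM -
# the model's weights are never touched:
prompt = f"Context: {context}\n\nQuestion: {question}"
```

Note the last comment: nothing about the model changes. If the question matches nothing in the store, the retrieved context is junk and you're back to the base model.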
On the other hand, a model that's been trained (like Spirit of Satoshi) has actually had a TON of data transformed and formatted for training, then a bunch of GPU cycles spent on actually CHANGING the weights and biases of either a pre-existing open-source model, or (the harder alternative) using all that data to do a ground-up "pre-training".
Both of these are very different processes from a ChatGPT wrapper: they actually involve training a model, whereas a wrapper just points an existing model at some reference material.
Training from scratch ultimately delivers the best possible result, but it takes WAY longer and is orders of magnitude more expensive.
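For contrast, here's a toy picture of what "changing the weights and biases" actually means: gradient descent nudging parameters to fit data. Real fine-tuning does this across billions of parameters on GPUs; the one-weight model, the made-up data, and the learning rate here are purely for illustration.

```python
# Toy illustration of training: repeatedly nudge a weight and a bias
# to reduce error on the data. This is the step a wrapper never does.
def train(pairs, lr=0.01, epochs=300):
    w, b = 0.0, 0.0  # the "model" starts out generic
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x + b
            err = pred - y
            # Gradient descent: this is what actually CHANGES the weights.
            w -= lr * err * x
            b -= lr * err
    return w, b

# Made-up "training data" following y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
# After training, w is near 2 and b near 1: the knowledge now lives
# IN the parameters, not in an external vector store.
```

That last point is the whole difference: a trained model carries what it learned in its weights, while a wrapper has to look everything up at question time.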
Hope that makes sense man!
got it, makes sense.
how do you think about building a moat around Spirit of Satoshi? how easy/hard do you think it will be for someone to replicate/compete with your model?
The moat is in the quality of the model.
Honestly - it's been over 6 months of tinkering and fucking around with data structures, data styles, data mixes, training formats.
This whole process is more art than science. And unless you know Bitcoin very intimately, you're not going to compete.
And if you do, you need to really have a good grasp of training models. Which honestly - not many people do. We have four data scientists who happen to be Bitcoiners too working with us - and we've all been pulling our hair out.
So I'd say perhaps the outward facing "moat" will be the quality of the model, and the inward moat is the experience we've had (trial, error, experimentation).
That's hard to replicate.