0 sats \ 0 replies \ @Svetski OP 16 Nov 2023 \ parent \ on: AMA - Building a "Bitcoin AI" bitcoin
haha - yeah this is a tough one.
How the model answers it will be somewhat emergent from what it's been fed.
We will see.
Totally agree with him.
See my comments at the bottom of this thread.
Outsourcing has a price, which is a lack of muscle.
That's why fewer of us can do the things our forefathers did.
LLMs will likely lead people (who become dependent on them) toward diminishing faculties of thought.
I 100% agree with @DarthCoin
Just basic physics. The more general you make something, the less specialised it can be.
Google is a good example, but so are reddit and forums.
Google might give you a good high level overview on something, but when you want to go deep into a topic or subject, you go to reddit or a forum.
Same thing will happen with models.
You'll have mainstream general ones that are "good enough" with general stuff.
Then when you want to go deep on a topic (especially those outside the mainstream, e.g. Bitcoin, self defence, home schooling, alternative health, etc.) you will want to seek out a domain-specific language model.
We will look to make the method we've used to train Satoshi and the training app (train.spiritofsatoshi.ai) available for other domains.
yeah - that's a RAG model, kind of like what I described to @kr below.
It's just a general base model that references his data, maybe with a slight LoRA fine-tune.
I reached out to do something similar with my data, so I can see what they're doing in greater detail.
yeah. It's always a balance.
I use the google maps example a lot.
When I was younger, I did door to door sales for about 6 months.
We had no iPhones back then. Just the big fat book with all the maps in it.
We had to learn and study those maps so we didn't get lost.
You could literally drop me anywhere (even in places I'd never been) and I could find my way around.
These days... drop a millennial or younger into a new place without a phone and they'll die of starvation or thirst.
In terms of what the world looks like..
Probably largely similar.
As I said to someone earlier - AI is a tool, like a computer. It exists to automate away some tasks and, if used properly, to create leverage for the user.
If you can further automate some of that leverage (by having the machine execute some economic tasks) you can increase the leverage, and ideally do more with less.
It's been the same story with automation, tools and technology since the beginning of time.
Mmm.
I actually have a contrarian viewpoint on this.
I think it will take a while for this to really happen at scale, because all of the current accounting system (especially with the big companies) is denominated in USD.
Doing machine to machine payments in sats will create an accounting nightmare for them - especially in the near term.
But...that doesn't mean we're not going there.
We are for sure.
I just think it's going to take a lot longer than what some of us Bitcoiners think..
ooh.. Good question..
haha
- Shogun, by James Clavell
- Musashi, by Eiji Yoshikawa
- The Book of Five Rings, by Miyamoto Musashi
- The Virtues of War, by Steven Pressfield
The moat is in the quality of the model.
Honestly - it's been over 6 months of tinkering and fucking around with data structures, data styles, data mixes, training formats.
This whole process is more art than science.
And unless you know Bitcoin very intimately, you're not going to compete.
And if you do, you need to really have a good grasp of training models. Which honestly - not many people do. We have four data scientists, who happen to be Bitcoiners too, working with us - and we've all been pulling our hair out.
So I'd say perhaps the outward facing "moat" will be the quality of the model, and the inward moat is the experience we've had (trial, error, experimentation).
That's hard to replicate.
🤣🤣🤣
One day...when there's enough data about my dumb ass, and I have nothing better to do with my life, I'll build a Ghost of Svetski.
Or maybe my kids will.
Until then, I'll try to do more useful things..
yes - it will definitely have a bias.
Language models are a mirror of the data they're trained on.
ALL data has an inherent Bias.
A bias is simply a "model of the world", or an "opinion" or a "worldview".
Everything that's ever been written, has a worldview (or bias).
Therefore, every LLM ever built will have....you guessed it...a Bias!
The key is not to try and remove bias (impossible) but to have MANY models, each with their own biases.
As Bitcoiners, our bias, and model of the world is different to the mainstream, fiat model of the world.
As a result, as the model is trained on more and more Bitcoin data, it will represent, more and more, some "aggregation" of the bitcoin model of the world.
Does that make sense?
Let me know if I can clarify further :)
So to clarify, in terms of benefit: "Automation and Leverage".
Same story as with all other tools and tech over the millennia of mankind's existence.
Honestly - it's like having a computer.
Like any other technology, "AI" is just a tool.
In fact, it should probably be called "IA" (Intelligence Amplification), because when used intelligently, that's what it does (once again, like any other tool or technology).
There's no real magic here. There's no "sentient being" that will come out of the circuits. All that "AGI" talk is just fairy tales from weirdos who wish they could create their own god. It's silly.
At the end of the day, it's a tool.
Use it where necessary and use it wisely, and you can do more work, in less time.
You can also use it stupidly (like how some people might use a hammer - hitting themselves in the head - or social media - doom scrolling all day) and make yourself dumber (i.e. never think for yourself, and just ask ChatGPT everything).
ok..
So a GPT wrapper is simply taking a bunch of data (say, your docs, a few books, whatever), chunking them up and then storing them in what's called a "Vector Database". Once you've done that, you have ChatGPT (or other LLM) reference the most relevant "chunk" from your vector database when someone asks a question.
This is kind of like inserting an example when you're using chatGPT. What you're essentially doing is inserting relevant context so when the "model" responds, it's more relevant.
It's a great, cheap, quick, "hacky" way to get a model to give more bitcoin-like responses, but it's not really a "Bitcoin model", nor has the model actually been "trained".
Where it falls over is when you ask anything that's not specifically in the vector store.
Then you're basically back to normal chatGPT.
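To make the wrapper idea concrete, here's a toy sketch of that chunk-store-retrieve-inject loop. It's purely illustrative: a real setup would use an embedding model and an actual vector database (Pinecone, FAISS, etc.), whereas this stand-in just scores chunks by shared words. All the function names here are made up for the example.

```python
# Toy sketch of a "GPT wrapper" (RAG) pipeline.
# Real systems use embeddings + a vector DB; word overlap is a
# crude stand-in for vector similarity, just to show the shape.

def chunk(text, size=80):
    """Split a document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(query, chunk_text):
    """Stand-in for vector similarity: count shared words."""
    return len(set(query.lower().split()) & set(chunk_text.lower().split()))

def retrieve(query, chunks, k=1):
    """Return the k most relevant chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query, chunks):
    """Inject retrieved context into the prompt sent to the base LLM."""
    context = "\n".join(retrieve(query, chunks))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = chunk("Bitcoin has a fixed supply of 21 million coins. "
             "Blocks are mined roughly every ten minutes.")
prompt = build_prompt("What is the supply of bitcoin?", docs)
```

The base model never changes here - all the "Bitcoin knowledge" lives in the prompt, which is exactly why it falls over the moment a question isn't covered by the stored chunks.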
On the other hand, a model that's been trained (like Spirit of Satoshi) has actually had a TON of data transformed and formatted for training, then a bunch of GPU cycles spent on actually CHANGING the weights and biases of either a pre-existing open source model, or (the harder alternative) using all that data to do a ground-up "pre-training".
These are both very different processes to a "ChatGPT wrapper". They actually involve training a model, whereas a wrapper just uses an existing model with some reference material.
Training from scratch ultimately delivers the best possible result, but it takes WAY longer and is orders of magnitude more expensive.
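For anyone wondering what "changing the weights" actually means, here's the smallest possible illustration - a one-weight toy model updated by gradient descent. This is obviously not LoRA or LLM pre-training (those operate on billions of weights), but the mechanic is the same: the model's parameters themselves move, unlike a wrapper which leaves the base model untouched.

```python
# Toy illustration of "training": gradient steps that change a
# model's weight. One-parameter linear model, squared-error loss.

def predict(w, x):
    return w * x

def train(w, data, lr=0.01, epochs=200):
    """Plain gradient descent: nudge w against the loss gradient."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (predict(w, x) - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
w_after = train(0.0, data)                    # converges toward 2.0
```

The wrapper approach never runs anything like this loop - it only ever changes the prompt, not the parameters.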
Hope that makes sense man!
GENESIS