301 sats \ 2 replies \ @k00b 16 Nov 2023 \ on: AMA - Building a "Bitcoin AI" bitcoin
Many people seem to think that ChatGPT will be the Google of AI, that there will be an AI model monopoly. Implicitly, these people think there isn't much need for narrow intelligence if you have sufficiently powerful general intelligence. Spirit of Satoshi seems like a bet against this.
Can you share how you arrived at the conclusion that there won't be an AI model monopoly? What does the future of AI look like to you?
Just basic physics. The more general you make something, the less specialised it can be.
Google is a good example, but so are Reddit and forums.
Google might give you a good high-level overview of something, but when you want to go deep into a topic, you go to Reddit or a forum.
Same thing will happen with models.
You'll have mainstream general ones that are "good enough" for general stuff.
Then, when you want to go deep on a topic (especially those outside the mainstream, e.g. Bitcoin, self defence, home schooling, alternative health, etc.), you will want to seek out a domain-specific language model.
We will look to make the method we've used to train Satoshi and the training app (train.spiritofsatoshi.ai) available for other domains.
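To make the domain-specific idea concrete, here is a minimal sketch of what fine-tuning a general base model on a single curated domain corpus could look like with the Hugging Face transformers/peft stack. The base model name, dataset file, and hyperparameters are placeholders for illustration, not the actual Spirit of Satoshi pipeline.

```python
# Minimal sketch: fine-tune a general base model on one domain corpus
# using LoRA adapters. Model name, dataset path, and hyperparameters
# are placeholders, not the actual Spirit of Satoshi setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "your-org/llama-style-base"          # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with small trainable LoRA adapters so the
# domain fine-tune stays cheap; target_modules assume Llama-style
# attention layer names.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# "bitcoin_corpus.jsonl" is a placeholder for a curated domain dataset
# with a "text" field (e.g. Q&A pairs produced by a training app).
data = load_dataset("json", data_files="bitcoin_corpus.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

Training small adapters instead of all weights is one common way to keep a domain fine-tune cheap while leaving the general model's weights intact.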
reply
"The more general you make something, the less specialised it can be."
... unless it has no memory or processing constraints... right?
Why couldn't a highly specialized self-defense LLM be given a new dataset for home-schooling so that it becomes highly specialized in both? And after that, why wouldn't the same hold for infinitely many new specialized datasets on the same LLM?
I would think that being highly specialized in N+1 datasets would make it even more specialized in each domain overall, because it has multiple domains from which to draw conclusions... what humans call "wisdom".
This is different for humans because it takes us so long to become specialized in one area, and we have short lifespans. We also have limited memory and processing power, and we seem to forget things that aren't used.
But LLMs don't have any of these limitations. So why can't they be highly specialized in seemingly everything? (Seriously trying to learn, not trying to be condescending; I'm actually very curious about this.)
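For concreteness, the scenario being asked about (handing the same model one new domain dataset after another) would look roughly like the loop below. The model name, file names, and settings are placeholders, and whether later rounds erode or compound the earlier specializations is exactly the open question here.

```python
# Mechanics of the question: keep handing the same model new domain
# corpora, one after another. All names below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "your-org/base-model"                # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

domains = ["self_defense.jsonl", "home_schooling.jsonl", "alt_health.jsonl"]
for i, path in enumerate(domains):
    ds = load_dataset("json", data_files=path)["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=ds.column_names)
    # Each pass continues training the same weights on the next domain.
    Trainer(model=model,
            args=TrainingArguments(output_dir=f"round_{i}",
                                   per_device_train_batch_size=4,
                                   num_train_epochs=1),
            train_dataset=ds,
            data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```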
reply