"The more general you make something, the less specialised it can be."
... unless it has no memory or processing constraints... right?
Why couldn't a highly specialized self-defense LLM be given a new dataset on home-schooling so that it becomes highly specialized in both? And why wouldn't the same hold for arbitrarily many new specialized datasets trained into the same LLM?
I would think that being highly specialized across N+1 domains would make it even better at each individual specialization, because it has multiple domains from which to draw conclusions... what humans call "wisdom".
This is different for humans because it takes us so long to become specialized in even one area, and we have short lifespans. We also have limited memory and processing power, and we seem to forget things we don't use.
But LLMs don't have any of these limitations. So why can't they be highly specialized in seemingly everything? (Seriously trying to learn here, not trying to be condescending. I'm genuinely curious about this.)
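To make the setup I'm imagining concrete, here's a toy sketch (not an LLM, just one shared parameter trained by gradient descent). In this naive form, training on the second "task" overwrites what was learned for the first, since both tasks pull on the same weight. My question is basically whether LLMs, at their scale, escape this effect:

```python
# Toy illustration of sequential training on shared weights.
# A single parameter w is trained on "task A" (which wants w = 2.0),
# then on "task B" (which wants w = -3.0), using the same weight.

def train(w, target, steps=200, lr=0.1):
    """Minimize the loss (w - target)^2 by plain gradient descent."""
    for _ in range(steps):
        grad = 2 * (w - target)  # derivative of (w - target)^2
        w -= lr * grad
    return w

w = 0.0
w = train(w, 2.0)                 # specialize on task A
loss_a_before = (w - 2.0) ** 2    # near zero: great at task A

w = train(w, -3.0)                # now specialize on task B, same weight
loss_a_after = (w - 2.0) ** 2     # large again: task A was overwritten

print(f"loss on task A right after A: {loss_a_before:.6f}")
print(f"loss on task A after B:       {loss_a_after:.6f}")
```

Obviously a real model has billions of parameters rather than one, so maybe different specializations can live in different parts of the network. That's the part I don't have intuition for.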