"...That’s because large language models are not actually built. They’re grown—or evolved, says Josh Batson, a research scientist at Anthropic.
It’s an apt metaphor. Most of the parameters in a model are values that are established automatically when it is trained, by a learning algorithm that is itself too complicated to follow. It’s like making a tree grow in a certain shape: You can steer it, but you have no control over the exact path the branches and leaves will take."
~lol
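
The metaphor holds up even at toy scale. Here's a minimal sketch (mine, not the article's) of a two-parameter model trained by gradient descent: nobody writes the final values of `w` and `b` by hand, they come out of the loop. The data, learning rate, and step count are all invented for the illustration.

```python
# Toy illustration of "grown, not built": the parameters start as arbitrary
# values and are nudged, step by step, by a learning rule. Nobody sets their
# final values directly.
import random

random.seed(0)

# Training data for a pattern the model has to discover: y = 3x + 1 plus noise.
data = [(x, 3 * x + 1 + random.uniform(-0.1, 0.1))
        for x in [i / 10 for i in range(20)]]

w, b = random.random(), random.random()  # parameters begin as random values
lr = 0.1                                 # learning rate (chosen arbitrarily)

for step in range(2000):
    # Gradient of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # The "growth" step: move each parameter a little in the direction
    # that reduces the error, then repeat.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near 3 and 1, but never hand-set
```

Scale that same loop up to billions of parameters and vastly more steps and you get the situation Batson is describing: the training recipe is known, but the particular values it settles on are not chosen by anyone, and tracing why it settled on them is the hard part.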