Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.

Learning a language can't be that hard: every baby in the world manages to do it in a few years. Figuring out how the process works is another story. Linguists have devised elaborate theories to explain it, but recent advances in machine learning have added a new wrinkle. When computer scientists began building the language models that power modern chatbots like ChatGPT, they set aside decades of research in linguistics, and their gamble seemed to pay off. But are their creations really learning?

"Even if they do something that looks like what a human does, they might be doing it for very different reasons," said Tal Linzen, a computational linguist at New York University.

It's not just a matter of quibbling about definitions. If language models really are learning language, researchers may need new theories to explain how they do it. But if the models are doing something more superficial, then perhaps machine learning has no insights to offer linguistics.