
This is an extremely informative post. Thank you.
However, allow me to play devil's advocate. Was it not long believed by deep learning skeptics that machines could never master language through big data alone, because language refers to external things which the machines do not have access to? In other words, humans don't learn language by relating words to words. They learn language by relating words to things, to feelings, to actions, etc. Could one say that LLMs also lack access to that right kind of data, yet nevertheless still managed to become quite proficient with language? Could that mean that with enough visual data, a robot could become as dexterous as a human, just as LLMs became proficient with language even though they lack human senses and perceptions?

Yes, technological development is often underestimated, and people working within a field hold a set of assumptions that can be upended by new developments. In the case of LLMs, it was advances in neural network architectures, particularly the transformer, that allowed associations between words to be modelled effectively at scale. The inventor of the Roomba could very well be in this category, operating from outdated assumptions.

Chinese robotics seems to be very advanced, with machines capable of dexterity and sensitivity beyond what Dr. Brooks suggests is possible.
