Grounded language acquisition through the eyes and ears of a single child

Wai Keen Vong, Wentao Wang, A. Emin Orhan, Brenden M. Lake

Research output: Contribution to journal › Article › peer-review


Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child’s everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child’s input.

Original language: English (US)
Pages (from-to): 504-511
Number of pages: 8
Issue number: 6682
State: Published - Feb 20 2024

ASJC Scopus subject areas

  • General

