TY - GEN
T1 - Human few-shot learning of compositional instructions
AU - Lake, Brenden M.
AU - Linzen, Tal
AU - Baroni, Marco
N1 - Funding Information:
We thank the NYU ConCats group, Michael Frank, Kristina Gulordava, Germán Kruszewski, Roger Levy, and Adina Williams for helpful suggestions.
Publisher Copyright:
© Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019. All rights reserved.
PY - 2019
Y1 - 2019
AB - People learn in fast and flexible ways that have not been emulated by machines. Once a person learns a new verb “dax,” he or she can effortlessly understand how to “dax twice,” “walk and dax,” or “dax vigorously.” There have been striking recent improvements in machine learning for natural language processing, yet the best algorithms require vast amounts of experience and struggle to generalize new concepts in compositional ways. To better understand these distinctively human abilities, we study the compositional skills of people through language-like instruction learning tasks. Our results show that people can learn and use novel functional concepts from very few examples (few-shot learning), successfully applying familiar functions to novel inputs. People can also compose concepts in complex ways that go beyond the provided demonstrations. Two additional experiments examined the assumptions and inductive biases that people make when solving these tasks, revealing three biases: mutual exclusivity, one-to-one mappings, and iconic concatenation. We discuss the implications for cognitive modeling and the potential for building machines with more human-like language learning capabilities.
KW - compositionality
KW - concept learning
KW - neural networks
KW - word learning
UR - http://www.scopus.com/inward/record.url?scp=85091234961&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091234961&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85091234961
T3 - Proceedings of the 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
SP - 611
EP - 617
BT - Proceedings of the 41st Annual Meeting of the Cognitive Science Society
PB - The Cognitive Science Society
T2 - 41st Annual Meeting of the Cognitive Science Society: Creativity + Cognition + Computation, CogSci 2019
Y2 - 24 July 2019 through 27 July 2019
ER -