TY - JOUR
T1 - People Infer Recursive Visual Concepts from Just a Few Examples
AU - Lake, Brenden M.
AU - Piantadosi, Steven T.
N1 - Funding Information:
We gratefully acknowledge support from the Moore-Sloan Data Science Environment. We thank Philip Johnson-Laird and Sangeet Khemlani for helpful comments and suggestions, and Neil Bramley for providing comments on a preliminary draft.
Publisher Copyright:
© 2019, Society for Mathematical Psychology.
PY - 2020/3
Y1 - 2020/3
AB - Machine learning has made major advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories. People can learn richer concepts from fewer examples, including causal models that explain how members of a category are formed. Here, we explore the limits of this human ability to infer causal “programs”—latent generating processes with nontrivial algorithmic properties—from one, two, or three visual examples. People were asked to extrapolate the programs in several ways, for both classifying and generating new examples. As a theory of these inductive abilities, we present a Bayesian program learning model that searches the space of programs for the best explanation of the observations. Although variable, people’s judgments are broadly consistent with the model and inconsistent with several alternatives, including a pretrained deep neural network for object recognition, indicating that people can learn and reason with rich algorithmic abstractions from sparse input data.
KW - Bayesian modeling
KW - Concept learning
KW - Program induction
KW - Recursion
UR - http://www.scopus.com/inward/record.url?scp=85112447621&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112447621&partnerID=8YFLogxK
DO - 10.1007/s42113-019-00053-y
M3 - Article
AN - SCOPUS:85112447621
VL - 3
SP - 54
EP - 65
JO - Computational Brain and Behavior
JF - Computational Brain and Behavior
SN - 2522-087X
IS - 1
ER -