TY - GEN
T1 - Grounding Compositional Hypothesis Generation in Specific Instances
AU - Bramley, Neil R.
AU - Rothe, Anselm
AU - Tenenbaum, Joshua B.
AU - Xu, Fei
AU - Gureckis, Todd M.
N1 - Funding Information:
NB was supported by the Moore Sloan Foundation, JT by ONR grant (N00014-13-1-0333), FX by an NSF grant (#1640816), and TG by NSF grant (BCS-1255538) and a John McDonnell Foundation Scholar Award.
Publisher Copyright:
© 2018 Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018. All rights reserved.
PY - 2018
Y1 - 2018
N2 - A number of recent computational models treat concept learning as a form of probabilistic rule induction in a space of language-like, compositional concepts. Inference in such models frequently requires repeatedly sampling from an (infinite) distribution over possible concept rules and comparing their relative likelihood in light of current data or evidence. However, we argue that most existing algorithms for top-down sampling are inefficient and cognitively implausible accounts of human hypothesis generation. As a result, we propose an alternative, Instance Driven Generator (IDG), that constructs bottom-up hypotheses directly out of encountered positive instances of a concept. Using a novel rule induction task based on the children's game Zendo, we compare these “bottom-up” and “top-down” approaches to inference. We find that the bottom-up IDG model accounts better for human inferences and results in a computationally more tractable inference mechanism for concept learning models based on a probabilistic language of thought.
AB - A number of recent computational models treat concept learning as a form of probabilistic rule induction in a space of language-like, compositional concepts. Inference in such models frequently requires repeatedly sampling from an (infinite) distribution over possible concept rules and comparing their relative likelihood in light of current data or evidence. However, we argue that most existing algorithms for top-down sampling are inefficient and cognitively implausible accounts of human hypothesis generation. As a result, we propose an alternative, Instance Driven Generator (IDG), that constructs bottom-up hypotheses directly out of encountered positive instances of a concept. Using a novel rule induction task based on the children's game Zendo, we compare these “bottom-up” and “top-down” approaches to inference. We find that the bottom-up IDG model accounts better for human inferences and results in a computationally more tractable inference mechanism for concept learning models based on a probabilistic language of thought.
KW - discovery
KW - hypothesis generation
KW - probabilistic language of thought
KW - active learning
KW - program induction
UR - http://www.scopus.com/inward/record.url?scp=85098611993&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098611993&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85098611993
T3 - Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018
SP - 1390
EP - 1395
BT - Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018
PB - The Cognitive Science Society
T2 - 40th Annual Meeting of the Cognitive Science Society: Changing Minds, CogSci 2018
Y2 - 25 July 2018 through 28 July 2018
ER -