Abstract
The paper explores the idea that causality-based probability judgments are determined by two competing drives: one towards veridicality and one towards effort reduction. Participants were taught the causal structure of novel categories and asked to make predictive and diagnostic probability judgments about the features of category exemplars. We found that participants violated the predictions of a normative causal Bayesian network model because they ignored relevant variables (Experiments 1-3) and because they failed to integrate over hidden variables (Experiment 2). When the task was made easier by stating whether alternative causes were present or absent as opposed to uncertain, judgments approximated the normative predictions (Experiment 3). We conclude that augmenting the popular causal Bayes net computational framework with cognitive shortcuts that reduce processing demands can provide a more complete account of causal inference.
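To make the normative benchmark concrete, below is a minimal sketch (not the authors' actual model or stimuli) of the kind of causal Bayes net computation the abstract refers to: a known cause, a hidden alternative cause, and an effect combined by a noisy-OR. Computing the predictive judgment P(E|C) and the diagnostic judgment P(C|E) normatively requires marginalizing over the hidden alternative cause; all parameter values are illustrative assumptions.

```python
# Minimal illustrative sketch of normative causal Bayes net inference.
# All structure and parameter values are assumptions, not the paper's model.

p_c = 0.5      # prior that the known cause C is present (assumed)
p_a = 0.3      # prior that the alternative cause A is present (assumed)
w_c = 0.8      # causal strength of C on the effect E (assumed)
w_a = 0.6      # causal strength of A on the effect E (assumed)

def p_e_given(c: bool, a: bool) -> float:
    """Noisy-OR likelihood of the effect given the two causes."""
    return 1 - (1 - w_c * c) * (1 - w_a * a)

# Predictive judgment P(E=1 | C=1): requires integrating over the hidden
# alternative cause A rather than ignoring it.
p_e_given_c = sum(
    p_e_given(True, a) * (p_a if a else 1 - p_a) for a in (True, False)
)

# Diagnostic judgment P(C=1 | E=1) via Bayes' rule, again marginalizing A.
def joint_e(c: bool) -> float:
    prior_c = p_c if c else 1 - p_c
    return prior_c * sum(
        p_e_given(c, a) * (p_a if a else 1 - p_a) for a in (True, False)
    )

p_c_given_e = joint_e(True) / (joint_e(True) + joint_e(False))

print(f"Predictive P(E|C) = {p_e_given_c:.3f}")   # ~0.836 with these values
print(f"Diagnostic P(C|E) = {p_c_given_e:.3f}")   # ~0.823 with these values
```

A participant who ignores the alternative cause A, or fails to integrate over it when it is hidden, would deviate from these values in the way the experiments describe; stating whether A is present or absent removes the marginalization step and brings judgments closer to the normative answer.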
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 64-88 |
| Number of pages | 25 |
| Journal | Argument and Computation |
| Volume | 4 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 1 2013 |
Keywords
- cognitive science (interdisciplinary links with computational argument)
- computational accounts of probabilistic argument
- conditionals (interdisciplinary links with computational argument)
- explanation
- mental models (interdisciplinary links with computational argument)
- rationality (interdisciplinary links with computational argument)
ASJC Scopus subject areas
- Linguistics and Language
- Computer Science Applications
- Computational Mathematics
- Artificial Intelligence