Decision theory, reinforcement learning, and the brain

Peter Dayan, Nathaniel D. Daw

Research output: Contribution to journal › Review article › peer-review

Abstract

Decision making is a core competence for animals and humans acting and surviving in environments they only partially comprehend, gaining rewards and punishments for their troubles. Decision-theoretic concepts permeate experiments and computational models in ethology, psychology, and neuroscience. Here, we review a well-known, coherent Bayesian approach to decision making, showing how it unifies issues in Markovian decision problems, signal detection psychophysics, sequential sampling, and optimal exploration, and we discuss paradigmatic psychological and neural examples of each problem. We discuss computational issues concerning what subjects know about their task and how ambitious they are in seeking optimal solutions; we address algorithmic topics concerning model-based and model-free methods for making choices; and we highlight key aspects of the neural implementation of decision making.
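The model-free methods mentioned in the abstract are typified by temporal-difference algorithms such as Q-learning, which learn action values directly from experienced rewards without building a model of the task. A minimal sketch on a toy two-state Markov decision problem (the states, actions, and rewards here are invented for illustration, not taken from the paper):

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Hypothetical toy dynamics: action 1 in state 0 moves to state 1;
# action 1 in state 1 yields reward 1 and returns to state 0.
# All other transitions stay in place with zero reward.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 0.0
    if state == 1 and action == 1:
        return 0, 1.0
    return state, 0.0

# Q-values: one estimate per (state, action) pair, learned from samples alone.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # temporal-difference update toward reward + discounted best next value
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    state = next_state

# After learning, action 1 should be valued above action 0 in both states.
```

A model-based method would instead learn the transition and reward functions encoded in `step` and plan over them; the review contrasts these two algorithmic routes to choice.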

Original language: English (US)
Pages (from-to): 429-453
Number of pages: 25
Journal: Cognitive, Affective and Behavioral Neuroscience
Volume: 8
Issue number: 4
State: Published - Dec 2008

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Behavioral Neuroscience

