Neural correlates of forward planning in a spatial decision task in humans

Dylan Alexander Simon, Nathaniel D. Daw

Research output: Contribution to journal › Article › peer-review


Although reinforcement learning (RL) theories have been influential in characterizing the mechanisms for reward-guided choice in the brain, the predominant temporal difference (TD) algorithm cannot explain many flexible or goal-directed actions that have been demonstrated behaviorally. We investigate such actions by contrasting an RL algorithm that is model based, in that it relies on learning a map or model of the task and planning within it, to traditional model-free TD learning. To distinguish these approaches in humans, we used functional magnetic resonance imaging in a continuous spatial navigation task, in which frequent changes to the layout of the maze forced subjects continually to relearn their favored routes, thereby exposing the RL mechanisms used. We sought evidence for the neural substrates of such mechanisms by comparing choice behavior and blood oxygen level-dependent (BOLD) signals to decision variables extracted from simulations of either algorithm. Both choices and value-related BOLD signals in striatum, although most often associated with TD learning, were better explained by the model-based theory. Furthermore, predecessor quantities for the model-based value computation were correlated with BOLD signals in the medial temporal lobe and frontal cortex. These results point to a significant extension of both the computational and anatomical substrates for RL in the brain.
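The central contrast in the abstract, between model-free temporal difference (TD) learning and model-based planning over a learned task model, can be illustrated with a minimal sketch. The toy chain "maze", the parameter values, and all function names below are illustrative assumptions for exposition; they are not the authors' actual task or simulation code.

```python
import numpy as np

# Toy 4-state chain "maze": states 0..3, actions 0 (left) / 1 (right);
# reward 1.0 on reaching terminal state 3. Illustrative assumption only.
n_states, n_actions = 4, 2
R = np.zeros(n_states)
R[3] = 1.0

def step(s, a):
    """Deterministic transition: action 1 moves right, action 0 moves left."""
    return min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)

def td_learn(episodes=200, alpha=0.1, gamma=0.9):
    """Model-free TD(0): learn state values from sampled transitions alone."""
    V = np.zeros(n_states)
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = 0
        while s != 3:
            a = int(rng.integers(n_actions))   # random exploration policy
            s2 = step(s, a)
            # TD error: reward plus discounted next value minus current value
            V[s] += alpha * (R[s2] + gamma * V[s2] - V[s])
            s = s2
    return V

def plan(gamma=0.9, iters=50):
    """Model-based planning: value iteration using the known transition model."""
    V = np.zeros(n_states)
    for _ in range(iters):
        for s in range(3):                     # state 3 is terminal
            V[s] = max(R[step(s, a)] + gamma * V[step(s, a)]
                       for a in range(n_actions))
    return V

V_mb = plan()       # optimal values computed by planning in the model
V_td = td_learn()   # values learned incrementally from experience
```

The key behavioral difference the paper exploits follows from this structure: after a change to the transition function (the maze layout), the model-based planner revalues all states in one sweep of planning, whereas the TD learner must re-experience transitions to update its cached values.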

Original language: English (US)
Pages (from-to): 5526-5539
Number of pages: 14
Journal: Journal of Neuroscience
Issue number: 14
State: Published - Apr 6 2011

ASJC Scopus subject areas

  • General Neuroscience

