Dynamic estimation of task-relevant variance in movement under risk

Michael S. Landy, Julia Trommershäuser, Nathaniel D. Daw

Research output: Contribution to journal › Article › peer-review


Humans take into account their own movement variability, as well as the potential consequences of different movement outcomes, when planning movement trajectories. When variability increases, planned movements are altered so as to optimize the expected consequences of the movement. Past research has focused on steady-state responses to changing conditions of movement under risk. Here, we study the dynamics of such strategy adjustment in a visuomotor decision task in which subjects reach toward a display with regions that lead to rewards and penalties, under conditions of changing uncertainty. In typical reinforcement learning tasks, subjects should base subsequent strategy on an estimate of the mean outcome (e.g., reward) over recent trials. In contrast, in our task, strategy should be based on a dynamic estimate of recent outcome uncertainty (i.e., squared error). We find that subjects respond to increased movement uncertainty by aiming movements more conservatively with respect to penalty regions, and that the estimate of uncertainty they use is well characterized by a weighted average of recent squared errors, with higher weights given to more recent trials.
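The recency-weighted estimate of outcome uncertainty described above can be sketched as an exponentially weighted running average of squared errors, so that recent trials contribute more than older ones. This is a minimal illustration, not the authors' fitted model; the learning rate `alpha` and the example error values are assumptions for demonstration only.

```python
def update_variance_estimate(var_est, error, alpha=0.2):
    """Update a running estimate of outcome variance with one trial's
    squared error. With 0 < alpha < 1, the weight on a trial's squared
    error decays geometrically with its age, so recent trials dominate.
    `alpha` is a hypothetical learning rate, not a value from the paper."""
    return (1 - alpha) * var_est + alpha * error ** 2

# Hypothetical sequence of trial-by-trial movement errors (arbitrary units)
errors = [1.0, -0.5, 2.0, 0.3, -1.8]

var_est = 0.0
for e in errors:
    var_est = update_variance_estimate(var_est, e)
```

A larger `alpha` makes the estimate track changes in movement variability faster but renders it noisier on a trial-to-trial basis; the paper's finding that higher weights go to more recent trials corresponds to this kind of geometric decay.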

Original language: English (US)
Pages (from-to): 12702-12711
Number of pages: 10
Journal: Journal of Neuroscience
Issue number: 37
State: Published - Sep 12, 2012

ASJC Scopus subject areas

  • General Neuroscience
