Use of cues in virtual reality depends on visual feedback

Jacqueline M. Fulvio, Bas Rokers

Research output: Contribution to journal › Article › peer-review


3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: Sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

Original language: English (US)
Article number: 16009
Journal: Scientific Reports
Issue number: 1
State: Published - Dec 1 2017

ASJC Scopus subject areas

  • General
