Combining sensory information to improve visualization

Marc Ernst, Martin Banks, Felix Wichmann, Laurence Maloney, Heinrich Buelthoff

Research output: Contribution to conference › Paper › peer-review

Abstract

Seemingly without effort, the human brain reconstructs the three-dimensional environment surrounding us from the light patterns striking the eyes. This holds across almost all viewing and lighting conditions. One important factor in this apparent ease is the redundancy of the information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, and the disparity between the two eyes' images are all, at least partly, redundant signals that provide information about the three-dimensional layout of the visual scene. Our brain uses all these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may lead to an altered perception and therefore a misinterpretation of the visualized data. In this panel we will discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems resulting from current display technology such as cathode-ray tubes.
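The cue-combination mechanism alluded to in the abstract is commonly modeled as reliability-weighted (maximum-likelihood) averaging of redundant sensory estimates, in which each cue is weighted by its inverse variance. The sketch below is an illustrative example of that general idea, not code from the paper; the function name, cue values, and variances are assumptions chosen for demonstration.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted (maximum-likelihood) combination of redundant
    sensory estimates of the same scene property.

    Each cue i contributes an estimate with variance sigma_i^2. The combined
    estimate weights each cue by its reliability (inverse variance), and the
    combined variance is never larger than that of the most reliable cue.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    reliabilities = 1.0 / variances
    weights = reliabilities / reliabilities.sum()
    combined_estimate = float(np.dot(weights, estimates))
    combined_variance = 1.0 / reliabilities.sum()
    return combined_estimate, combined_variance

# Hypothetical example: a depth estimate from binocular disparity (reliable)
# and one from shading (less reliable), in arbitrary depth units.
depth, var = combine_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0])
print(depth, var)  # 10.4, 0.8 -- pulled toward the more reliable disparity cue
```

Under this scheme, a display that degrades or removes one cue (e.g. no motion parallax, conflicting disparity on a flat screen) shifts the weights toward the remaining cues, which is one way the reduced and abstracted information in data visualizations can alter the resulting percept.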

Original language: English (US)
Pages: 571-574
Number of pages: 4
State: Published - 2002
Event: VIS 2002, IEEE Visualisation 2002 - Boston, MA, United States
Duration: Oct 27, 2002 - Nov 1, 2002

ASJC Scopus subject areas

  • Software
  • General Computer Science
  • General Engineering
  • Computer Graphics and Computer-Aided Design
