Abstract
The human brain reconstructs the three-dimensional environment around us from the light pattern striking the eyes, seemingly without effort and across almost all viewing and lighting conditions. One important factor behind this apparent ease is the redundancy of the information provided by the sensory organs. For example, perspective distortions, shading, motion parallax, and the disparity between the two eyes' images are all, at least in part, redundant signals that inform us about the three-dimensional layout of the visual scene. The brain uses all of these different sensory signals and combines the available information into a coherent percept. In displays visualizing data, however, the information is often highly reduced and abstracted, which may alter perception and thus lead to misinterpretation of the visualized data. In this panel we discuss mechanisms involved in the combination of sensory information and their implications for simulations using computer displays, as well as problems arising from current display technology such as cathode-ray tubes.
Original language | English (US) |
---|---|
Pages | 571-574 |
Number of pages | 4 |
State | Published - 2002 |
Event | VIS 2002, IEEE Visualisation 2002 - Boston, MA, United States |
Duration | Oct 27 2002 → Nov 1 2002 |
Other
Other | VIS 2002, IEEE Visualisation 2002 |
---|---|
Country/Territory | United States |
City | Boston, MA |
Period | 10/27/02 → 11/1/02 |
ASJC Scopus subject areas
- Software
- General Computer Science
- General Engineering
- Computer Graphics and Computer-Aided Design