Interactive Vision: How Active Perceivers Sample Information Over Time

David Melcher

Research output: Contribution to journal › Article › peer-review

Abstract

Human-object and human-computer interactions take place over time, during which we take in sensory information, make predictions about the impact of our actions based on our goals, and then integrate the new sensory information in order to update our internal models and guide new actions. Here, I focus on two key aspects of interaction that unfold over time: (1) active vision using eye and body movements and (2) temporal windows and rhythms. Recent scientific research provides new insights into how we integrate sensory input over time and how information processing speed varies over time and between individuals. Understanding these temporal parameters of how we perceive and act, and tailoring the experience to match individual differences in temporal processing, may dramatically improve the design of efficient and usable objects and interfaces. Failing to take these temporal factors into account can leave the user overwhelmed or confused, or cause them to miss important information. These issues are illustrated by the challenge of adapting advanced driver assistance technologies to older drivers.
Original language: English (US)
Pages (from-to): 126-133
Number of pages: 8
Journal: diid disegno industriale industrial design
Volume: 74
DOIs
State: Published - Dec 16 2021

Keywords

  • Interactive vision
  • Temporal processing
  • Active vision
  • Aging
  • Advanced Driver Assistance Systems
