Abstract
Human-object and human-computer interactions take place over time, during which we take in sensory information, make predictions about the impact of our actions based on our goals, and then integrate the new sensory information to update our internal models and guide new actions. Here, I focus on two key aspects of interaction that unfold over time: (1) active vision using eye and body movements and (2) temporal windows and rhythms. Recent scientific research provides new insights into how we integrate sensory input over time and how information processing speed varies over time and between individuals. Understanding these temporal parameters of how we perceive and act, and tailoring the experience to match individual differences in temporal processing, may dramatically improve the design of efficient and usable objects and interfaces. Failing to take these temporal factors into account can leave the user overwhelmed or confused, or cause them to miss important information. These issues are illustrated with the challenge of adapting advanced driver assistance technologies to older drivers.
| Original language | English (US) |
|---|---|
| Pages (from-to) | 126-133 |
| Number of pages | 8 |
| Journal | diid disegno industriale industrial design |
| Volume | 74 |
| DOIs | |
| State | Published - Dec 16 2021 |
Keywords
- Interactive vision
- Temporal processing
- Active vision
- Aging
- Advanced Driver Assistance Systems