Abstract
In this article we review the current literature on cross-modal recognition and present new findings from our studies on object and scene recognition. Specifically, we address two questions: what is the nature of the representation underlying each sensory system that facilitates convergence across the senses, and how is perception modified by the interaction of the senses? In the first set of experiments, the recognition of unfamiliar objects within and across the visual and haptic modalities was investigated under changes in orientation (0° or 180°). An orientation change increased recognition errors within each modality, but this effect was reduced across modalities. Our results suggest that cross-modal object representations are mediated by surface-dependent representations. In a second series of experiments, we investigated how spatial information is integrated across modalities and viewpoints, using scenes of familiar 3D objects as stimuli. We found that scene recognition was less efficient when there was a change in either modality or orientation between learning and test. Furthermore, haptic learning was selectively disrupted by a verbal interpolation task. Our findings are discussed with reference to the separate spatial encoding of visual and haptic scenes. We conclude by discussing a number of constraints under which cross-modal integration is optimal for object recognition. These constraints include the nature of the task and the degree of spatial and temporal congruency of information across the modalities.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 147-159 |
| Number of pages | 13 |
| Journal | Journal of Physiology Paris |
| Volume | 98 |
| Issue number | 1-3 SPEC. ISS. |
| DOIs | |
| State | Published - 2004 |
Keywords
- Cross-modal
- Haptics
- Object recognition
- Scene recognition
- Vision
ASJC Scopus subject areas
- General Neuroscience
- Physiology (medical)