Interaction between vision and speech in face recognition

Isabelle Bülthoff, Fiona N. Newell

Research output: Contribution to journal › Article › peer-review

Abstract

Many face studies have shown that in memory tasks, distinctive faces are more easily recognized than typical faces. All of these studies were performed with visual information only. We investigated whether a cross-modal interaction between auditory and visual stimuli exists for face distinctiveness. Our experimental question was: can visually typical faces become perceptually distinctive when they are accompanied by distinctive voice stimuli? In a training session, participants were presented with faces from two sets. In one set, all faces were accompanied by characteristic auditory stimuli during learning (d-faces: different languages, intonations, accents, etc.). In the other set, all faces were accompanied by typical auditory stimuli during learning (s-faces: same words, same language). Face stimuli were counterbalanced across auditory conditions. We measured recognition performance in an old/new recognition task in which only faces, without auditory stimuli, were presented. Our results show that participants were significantly better (t(12) = 3.89, p < 0.005) at recognizing d-faces than s-faces in the test session. These results show that there is an interaction between different sensory inputs and that the typicality of stimuli in one modality can be modified by concomitantly presented stimuli in other sensory modalities.

Original language: English (US)
Pages (from-to): 825a
Journal: Journal of Vision
Volume: 3
Issue number: 9
State: Published - 2003

ASJC Scopus subject areas

  • Ophthalmology
  • Sensory Systems
