The effect of combined sensory and semantic components on audio-visual speech perception in older adults

Corrina Maguinness, Annalisa Setti, Kate E. Burke, Rose Anne Kenny, Fiona N. Newell

Research output: Contribution to journal › Article › peer-review

Abstract

Previous studies have found that perception in older people benefits from multisensory over unisensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio-visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found a greater cost in recall performance for semantically meaningless speech in the audio-visual 'blur' condition compared to the audio-visual 'no blur' condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

Original language: English (US)
Pages (from-to): 1-9
Number of pages: 9
Journal: Frontiers in Aging Neuroscience
Volume: 3
Issue number: DEC
DOIs
State: Published - 2011

Keywords

  • Aging
  • Audio-visual
  • Cross-modal
  • Multisensory
  • Semantics
  • Speech perception
  • Top-down

ASJC Scopus subject areas

  • Aging
  • Cognitive Neuroscience
