Detection of sounds in the auditory stream: Event-related fMRI evidence for differential activation to speech and nonspeech

Athena Vouloumanos, Kent A. Kiehl, Janet F. Werker, Peter F. Liddle

Research output: Contribution to journal › Article › peer-review

Abstract

The detection of speech in an auditory stream is a requisite first step in processing spoken language. In this study, we used event-related fMRI to investigate the neural substrates mediating detection of speech compared with that of nonspeech auditory stimuli. Unlike previous studies addressing this issue, we contrasted speech with nonspeech analogues that were matched along key temporal and spectral dimensions. In an oddball detection task, listeners heard nonsense speech sounds, matched sine wave analogues (complex nonspeech), or single tones (simple nonspeech). Speech stimuli elicited significantly greater activation than both complex and simple nonspeech stimuli in classic receptive language areas, namely in the middle temporal gyri bilaterally and in a locus lateralized to the left posterior superior temporal gyrus. In addition, speech activated a small cluster in the right inferior frontal gyrus. The activation of these areas in a simple detection task, which requires neither identification nor linguistic analysis, suggests that they play a fundamental role in speech processing.

Original language: English (US)
Pages (from-to): 994-1005
Number of pages: 12
Journal: Journal of Cognitive Neuroscience
Volume: 13
Issue number: 7
DOIs
State: Published - 2001

ASJC Scopus subject areas

  • Cognitive Neuroscience
