Temporal modulations in speech and music

Nai Ding, Aniruddh D. Patel, Lin Chen, Henry Butler, Cheng Luo, David Poeppel

Research output: Contribution to journal › Review article › Peer-reviewed

Abstract

Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25–32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages, including languages with typologically different rhythmic characteristics. A different but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types as well as symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks around 5 Hz and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility that should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
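
The quantity summarized above, the modulation spectrum, is simply the power spectrum of a sound's slow intensity envelope. Below is a minimal Python sketch of one common way to compute it, assuming a broadband Hilbert envelope, resampling to 100 Hz, and Welch's method with 4 s segments. This is an illustrative sketch, not the paper's pipeline (the article's analysis involves an auditory-filterbank decomposition and normalization choices not shown here); the function name `modulation_spectrum`, the parameter `env_fs`, and all numeric settings are assumptions made for this example.

```python
import numpy as np
from scipy.signal import hilbert, welch, resample_poly

def modulation_spectrum(audio, fs, env_fs=100):
    """Power spectrum of the slow intensity modulations of a mono waveform."""
    # Amplitude envelope via the analytic signal (Hilbert transform).
    envelope = np.abs(hilbert(audio))
    # Downsample the envelope: the modulations of interest lie below 32 Hz,
    # so a 100 Hz envelope sampling rate is ample.
    envelope = resample_poly(envelope, env_fs, int(fs))
    # Welch power spectrum of the envelope; 4 s segments give 0.25 Hz resolution.
    freqs, power = welch(envelope, fs=env_fs, nperseg=4 * env_fs)
    # Keep the 0.25-32 Hz band analyzed in the paper.
    band = (freqs >= 0.25) & (freqs <= 32)
    return freqs[band], power[band]
```

On the paper's findings, a spectrum computed along these lines from speech should show a broad peak near 5 Hz, while one computed from music should peak near 2 Hz.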

Original language: English (US)
Pages (from-to): 181–187
Number of pages: 7
Journal: Neuroscience and Biobehavioral Reviews
Volume: 81
DOIs:
State: Published - Oct 2017

Keywords

  • Modulation spectrum
  • Music
  • Rhythm
  • Speech
  • Temporal modulations

ASJC Scopus subject areas

  • Neuropsychology and Physiological Psychology
  • Cognitive Neuroscience
  • Behavioral Neuroscience
