Hierarchically nested networks optimize the analysis of audiovisual speech

Nikos Chalas, Diana Omigie, David Poeppel, Virginie van Wassenhove

Research output: Contribution to journal › Article › peer-review


In conversational settings, seeing the speaker's face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener's cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.
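The nesting described above, in which high-frequency (beta, gamma) amplitude depends on the phase of low-frequency (delta, theta) activity, is a form of phase-amplitude cross-frequency coupling. As an illustration only (not the authors' analysis pipeline), the sketch below computes a mean-vector-length coupling estimate on a synthetic signal using standard SciPy tools; the function names, band limits, and test signal are assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling:
    |mean(amplitude * exp(i * phase))| across time."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))  # theta phase
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))        # gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Synthetic demo: gamma amplitude locked to theta phase vs. constant gamma
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.5 * (1 + theta) * np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
uncoupled = theta + 0.5 * np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
print(pac_mvl(coupled, fs) > pac_mvl(uncoupled, fs))  # → True
```

The coupled signal, whose 50 Hz amplitude follows the 6 Hz cycle, yields a markedly larger coupling value than the signal with constant gamma amplitude.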

Original language: English (US)
Article number: 106257
Issue number: 3
State: Published - Mar 17, 2023


Keywords

  • Neuroscience
  • Sensory neuroscience
  • Signal processing

ASJC Scopus subject areas

  • General


