In spoken word recognition, the future predicts the past

Laura Gwilliams, Tal Linzen, David Poeppel, Alec Marantz

Research output: Contribution to journal › Article › peer-review


Speech is an inherently noisy and ambiguous signal. To fluently derive meaning, a listener must integrate contextual information to guide interpretations of the sensory input. Although many studies have demonstrated the influence of prior context on speech perception, the neural mechanisms supporting the integration of subsequent context remain unknown. Using MEG to record from human auditory cortex, we analyzed responses to spoken words with a varyingly ambiguous onset phoneme, the identity of which is later disambiguated at the lexical uniqueness point. Fifty participants (both male and female) were recruited across two MEG experiments. Our findings suggest that primary auditory cortex is sensitive to phonological ambiguity very early during processing, at just 50 ms after onset. Subphonemic detail is preserved in auditory cortex over long timescales and re-evoked at subsequent phoneme positions. Commitments to phonological categories occur in parallel, resolving on the shorter timescale of ~450 ms. These findings provide evidence that future input determines the perception of earlier speech sounds by maintaining sensory features until they can be integrated with top-down lexical information.

Original language: English (US)
Pages (from-to): 7585-7599
Number of pages: 15
Journal: Journal of Neuroscience
Issue number: 35
State: Published - Aug 29 2018


Keywords

  • Auditory processing
  • Lexical access
  • MEG
  • Speech

ASJC Scopus subject areas

  • General Medicine


