Computational models of auditory perception from feature extraction to stream segregation and behavior

James Rankin, John Rinzel

Research output: Contribution to journal › Review article › peer-review

Abstract

Audition is by nature dynamic, from brainstem processing on sub-millisecond time scales, to segregating and tracking sound sources with changing features, to the pleasure of listening to music and the satisfaction of getting the beat. We review recent advances from computational models of sound localization, of auditory stream segregation, and of beat perception/generation. A wealth of behavioral, electrophysiological, and imaging studies sheds light on these processes, typically using synthesized sounds with regular temporal structure. Computational models integrate knowledge from different experimental fields and at different levels of description. We advocate a neuromechanistic modeling approach that incorporates knowledge of the auditory system from various fields, that utilizes plausible neural mechanisms, and that bridges our understanding across disciplines.

Original language: English (US)
Pages (from-to): 46-53
Number of pages: 8
Journal: Current Opinion in Neurobiology
Volume: 58
DOIs
State: Published - Oct 2019

ASJC Scopus subject areas

  • General Neuroscience
