Abstract
Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over such wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures on timescales from 200 down to 30 ms and investigated temporal coding at each timescale. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although acoustic dynamics at all timescales induced considerable intertrial phase coherence, classification analyses revealed that acoustic information at all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction showed that acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that temporal coding generalizes across the theta and gamma bands with comparable capacity. Our findings provide a novel perspective: acoustic information at all timescales is discretized into two discrete temporal chunks for further perceptual analysis.
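The intertrial phase coherence (ITPC) mentioned above measures how consistently the phase of a band-limited neural signal aligns across trials at each timepoint. A minimal sketch of the standard ITPC computation follows; the array shapes, random-number seed, and phase values are illustrative assumptions, not the paper's actual pipeline (in practice, phases would come from, e.g., a Hilbert transform of band-pass-filtered MEG data).

```python
import numpy as np

def itpc(phases):
    """Intertrial phase coherence: length of the mean resultant vector
    of per-trial phases. `phases` has shape (n_trials, n_timepoints).
    Returns values in [0, 1]: 0 = random phases, 1 = perfect alignment."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Hypothetical example: 50 trials, one phase-locked timepoint, three random ones
rng = np.random.default_rng(0)
n_trials = 50
locked = np.full((n_trials, 1), np.pi / 4)           # identical phase on every trial
random = rng.uniform(-np.pi, np.pi, (n_trials, 3))   # uniformly random phases
coh = itpc(np.hstack([locked, random]))
# coh[0] is 1 (perfect locking); coh[1:] stays near 0 for random phases
```

Note the design choice: averaging unit complex vectors `exp(1j * phase)` rather than the raw phase angles avoids wrap-around artifacts at ±π.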
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 2600-2614 |
| Number of pages | 15 |
| Journal | Cerebral Cortex |
| Volume | 30 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 14 2020 |
Keywords
- asymmetric sampling
- discretization
- multiplexing
- temporal channel
- temporal processing
ASJC Scopus subject areas
- Cognitive Neuroscience
- Cellular and Molecular Neuroscience