Abstract
In this paper, we present a novel state-of-the-art system for automatic downbeat tracking from music signals. The audio signal is first segmented into frames synchronized at the tatum level of the music. We then extract different kinds of features based on harmony, melody, rhythm, and bass content to feed convolutional neural networks that are adapted to take advantage of the characteristics of each feature. The outputs of this ensemble of neural networks are combined to obtain one downbeat likelihood per tatum. The downbeat sequence is finally decoded with a flexible and efficient temporal model that takes advantage of the assumed metrical continuity of a song. We evaluate our system on a large corpus of nine datasets, compare its performance to four other published algorithms, and obtain a significant improvement of 16.8 percentage points over the second-best system, at a moderate overall cost in training and testing. The influence of each step of the method is studied to show its strengths and shortcomings.
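The last two stages of the pipeline the abstract outlines, fusing the ensemble's per-tatum downbeat likelihoods and decoding the downbeat sequence with a continuity-aware temporal model, can be illustrated concretely. Below is a minimal NumPy sketch, not the authors' implementation: the averaging fusion, the bar-position state space, and the `switch_penalty` value are illustrative assumptions standing in for the paper's learned combination and more flexible temporal model.

```python
# Minimal sketch (not the paper's code) of ensemble fusion and
# continuity-aware decoding of per-tatum downbeat likelihoods.
# Assumptions: simple averaging for fusion, a fixed number of tatums
# per bar, and a hand-picked switch_penalty.
import numpy as np


def fuse_ensemble(likelihoods):
    """Average per-tatum downbeat likelihoods from several networks.

    likelihoods: shape (n_networks, n_tatums), values in [0, 1].
    """
    return np.mean(likelihoods, axis=0)


def decode_downbeats(downbeat_prob, tatums_per_bar=4, switch_penalty=5.0):
    """Viterbi-style decoding over bar positions 0..tatums_per_bar-1.

    A state is the position of a tatum within the bar; position 0 emits
    the downbeat probability, the others its complement. Advancing one
    position per tatum (metrical continuity) is free; any other jump
    pays switch_penalty in log-likelihood.
    """
    n, S = len(downbeat_prob), tatums_per_bar
    eps = 1e-12
    emit = np.empty((n, S))
    emit[:, 0] = np.log(downbeat_prob + eps)
    emit[:, 1:] = np.log(1.0 - downbeat_prob + eps)[:, None]

    delta = emit[0].copy()                    # best log-score per state
    back = np.zeros((n, S), dtype=int)        # backpointers
    for t in range(1, n):
        new_delta = np.empty(S)
        for s in range(S):
            prev = (s - 1) % S                # the continuous predecessor
            scores = delta - switch_penalty   # discontinuous transitions
            scores[prev] = delta[prev]        # continuity is unpenalized
            best = int(np.argmax(scores))
            back[t, s] = best
            new_delta[s] = scores[best] + emit[t, s]
        delta = new_delta

    # Backtrace the optimal state path; downbeats are tatums in state 0.
    path = np.empty(n, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(n - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path == 0


if __name__ == "__main__":
    # Three noisy "networks" observing a 4-tatum bar pattern.
    rng = np.random.default_rng(0)
    truth = (np.arange(32) % 4 == 0).astype(float)
    nets = np.clip(truth + 0.3 * rng.standard_normal((3, 32)), 0.01, 0.99)
    print(decode_downbeats(fuse_ensemble(nets)).astype(int))
```

The penalty on discontinuous transitions is what encodes the "assumed metrical continuity" the abstract mentions: isolated spurious likelihood peaks cannot flip a single tatum to a downbeat without paying for breaking the bar-length periodicity.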
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 72-85 |
| Number of pages | 14 |
| Journal | IEEE/ACM Transactions on Audio, Speech, and Language Processing |
| Volume | 25 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2017 |
Keywords
- Convolutional neural networks
- downbeat tracking
- music information retrieval
- music signal processing
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- Acoustics and Ultrasonics
- Computational Mathematics
- Electrical and Electronic Engineering