Robust downbeat tracking using an ensemble of convolutional networks

Simon Durand, Juan Pablo Bello, Bertrand David, Gael Richard

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we present a novel state-of-the-art system for automatic downbeat tracking from music signals. The audio signal is first segmented into frames that are synchronized at the tatum level of the music. We then extract different kinds of features based on harmony, melody, rhythm, and bass content to feed convolutional neural networks that are adapted to take advantage of the characteristics of each feature. This ensemble of neural networks is combined to obtain one downbeat likelihood per tatum. The downbeat sequence is finally decoded with a flexible and efficient temporal model that takes advantage of the assumed metrical continuity of a song. We then evaluate our system on a large set of nine datasets, compare its performance to four other published algorithms, and obtain a significant increase of 16.8 percentage points over the second-best system, at a moderate overall cost in training and testing. The influence of each step of the method is studied to show its strengths and shortcomings.
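
The abstract describes two combination steps: averaging the feature-specific CNN outputs into one downbeat likelihood per tatum, and decoding the downbeat sequence with a temporal model that exploits metrical continuity. The paper's own implementation is not reproduced here; the Python/NumPy sketch below only illustrates the idea under simplified assumptions (random stand-in likelihoods, a fixed 4-tatum bar, a hand-set observation and transition model, and a plain Viterbi pass), so every name and parameter in it is illustrative rather than the authors' actual model.

    import numpy as np

    # Stand-in per-tatum downbeat likelihoods from four feature-specific
    # CNNs (harmony, melody, rhythm, bass); shape: (n_networks, n_tatums).
    # In the real system these would be network outputs.
    rng = np.random.default_rng(0)
    cnn_likelihoods = rng.random((4, 64))

    # Ensemble step: average the networks into one likelihood per tatum.
    downbeat_lik = cnn_likelihoods.mean(axis=0)

    # Toy temporal model: hidden state i = "this tatum is position i of an
    # assumed 4-tatum bar". Position 0 is the downbeat; the remaining
    # likelihood mass is spread uniformly over the other positions.
    n_pos = 4
    obs = np.vstack([downbeat_lik]
                    + [(1.0 - downbeat_lik) / (n_pos - 1)] * (n_pos - 1))

    # Transitions strongly favor advancing one position per tatum
    # (metrical continuity), with a small probability of jumping elsewhere.
    trans = np.full((n_pos, n_pos), 1e-3)
    for i in range(n_pos):
        trans[i, (i + 1) % n_pos] = 1.0
    trans /= trans.sum(axis=1, keepdims=True)

    # Viterbi decoding in the log domain.
    log_obs, log_trans = np.log(obs + 1e-12), np.log(trans)
    n_t = obs.shape[1]
    delta = np.zeros((n_pos, n_t))          # best log-score per state/tatum
    psi = np.zeros((n_pos, n_t), dtype=int)  # backpointers
    delta[:, 0] = np.log(1.0 / n_pos) + log_obs[:, 0]
    for t in range(1, n_t):
        scores = delta[:, t - 1, None] + log_trans  # (from_state, to_state)
        psi[:, t] = scores.argmax(axis=0)
        delta[:, t] = scores.max(axis=0) + log_obs[:, t]

    path = np.zeros(n_t, dtype=int)
    path[-1] = delta[:, -1].argmax()
    for t in range(n_t - 2, -1, -1):
        path[t] = psi[path[t + 1], t + 1]

    downbeat_tatums = np.where(path == 0)[0]  # tatums decoded as downbeats
    print(downbeat_tatums)

In this toy transition matrix, the dominant mass on advancing exactly one position encodes the assumed metrical continuity, while the small residual mass lets the decoder recover from occasional metre changes, which is the kind of flexibility the abstract alludes to.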

Original language: English (US)
Pages (from-to): 72-85
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
Volume: 25
Issue number: 1
DOIs
State: Published - Jan 2017

Keywords

  • Convolutional neural networks
  • downbeat tracking
  • music information retrieval
  • music signal processing

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Acoustics and Ultrasonics
  • Computational Mathematics
  • Electrical and Electronic Engineering
