TY - JOUR
T1 - Adaptive Pooling Operators for Weakly Labeled Sound Event Detection
AU - McFee, Brian
AU - Salamon, Justin
AU - Bello, Juan Pablo
N1 - Funding Information:
Manuscript received April 25, 2018; revised July 13, 2018; accepted July 17, 2018. Date of current version August 13, 2018. This work was supported in part by the Moore-Sloan Data Science Environment at NYU, in part by National Science Foundation Awards 1544753 and 1633259, and in part by the Google Faculty Award. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Alexey Ozerov. (Corresponding author: Brian McFee.) B. McFee is with the Music and Audio Research Laboratory and the Center for Data Science, New York University, New York, NY 10003 USA (e-mail: brian.mcfee@nyu.edu).
Publisher Copyright:
© 2018 IEEE.
PY - 2018/11
Y1 - 2018/11
N2 - Sound event detection (SED) methods are tasked with labeling segments of audio recordings by the presence of active sound sources. SED is typically posed as a supervised machine learning problem, requiring strong annotations for the presence or absence of each sound source at every time instant within the recording. However, strong annotations of this type are both labor- and cost-intensive for human annotators to produce, which limits the practical scalability of SED methods. In this paper, we treat SED as a multiple instance learning (MIL) problem, where training labels are static over a short excerpt, indicating the presence or absence of sound sources but not their temporal locality. The models, however, must still produce temporally dynamic predictions, which must be aggregated (pooled) when comparing against static labels during training. To facilitate this aggregation, we develop a family of adaptive pooling operators - referred to as autopool - which smoothly interpolate between common pooling operators, such as min-, max-, or average-pooling, and automatically adapt to the characteristics of the sound sources in question. We evaluate the proposed pooling operators on three datasets, and demonstrate that in each case, the proposed methods outperform nonadaptive pooling operators for static prediction, and nearly match the performance of models trained with strong, dynamic annotations. The proposed method is evaluated in conjunction with convolutional neural networks, but can be readily applied to any differentiable model for time-series label prediction. While this paper focuses on SED applications, the proposed methods are general, and could be applied widely to MIL problems in any domain.
AB - Sound event detection (SED) methods are tasked with labeling segments of audio recordings by the presence of active sound sources. SED is typically posed as a supervised machine learning problem, requiring strong annotations for the presence or absence of each sound source at every time instant within the recording. However, strong annotations of this type are both labor- and cost-intensive for human annotators to produce, which limits the practical scalability of SED methods. In this paper, we treat SED as a multiple instance learning (MIL) problem, where training labels are static over a short excerpt, indicating the presence or absence of sound sources but not their temporal locality. The models, however, must still produce temporally dynamic predictions, which must be aggregated (pooled) when comparing against static labels during training. To facilitate this aggregation, we develop a family of adaptive pooling operators - referred to as autopool - which smoothly interpolate between common pooling operators, such as min-, max-, or average-pooling, and automatically adapt to the characteristics of the sound sources in question. We evaluate the proposed pooling operators on three datasets, and demonstrate that in each case, the proposed methods outperform nonadaptive pooling operators for static prediction, and nearly match the performance of models trained with strong, dynamic annotations. The proposed method is evaluated in conjunction with convolutional neural networks, but can be readily applied to any differentiable model for time-series label prediction. While this paper focuses on SED applications, the proposed methods are general, and could be applied widely to MIL problems in any domain.
KW - Sound event detection
KW - deep learning
KW - machine learning
KW - multiple instance learning
UR - http://www.scopus.com/inward/record.url?scp=85052399585&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052399585&partnerID=8YFLogxK
U2 - 10.1109/TASLP.2018.2858559
DO - 10.1109/TASLP.2018.2858559
M3 - Article
AN - SCOPUS:85052399585
SN - 2329-9290
VL - 26
SP - 2180
EP - 2193
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
IS - 11
M1 - 8434391
ER -
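
The abstract above describes autopool as a pooling operator that interpolates smoothly between min-, average-, and max-pooling via a learnable parameter. The snippet below is a minimal NumPy sketch of one such softmax-weighted pooling, consistent with that description; the function name autopool, the fixed scalar alpha (a trainable per-class parameter in the paper's setting), and the toy frame-level predictions p are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def autopool(p, alpha=1.0, axis=0):
    """Softmax-weighted pooling of frame-level predictions p along `axis`.

    alpha -> 0    recovers average pooling,
    alpha -> +inf approaches max pooling,
    alpha -> -inf approaches min pooling.
    In a learned model, alpha would be a trainable per-class scalar.
    """
    # Numerically stable softmax weights over the pooled (time) axis
    z = alpha * p
    z = z - np.max(z, axis=axis, keepdims=True)
    w = np.exp(z)
    w = w / np.sum(w, axis=axis, keepdims=True)
    # Weighted aggregate of the frame-level predictions
    return np.sum(p * w, axis=axis)

# Toy clip: 3 time frames x 2 classes of frame-level probabilities
p = np.array([[0.1, 0.9],
              [0.2, 0.8],
              [0.9, 0.1]])

print(autopool(p, alpha=0.0))    # ~ mean over time: [0.4, 0.6]
print(autopool(p, alpha=50.0))   # ~ max over time:  [~0.9, ~0.9]
print(autopool(p, alpha=-50.0))  # ~ min over time:  [~0.1, ~0.1]
```

Because alpha enters only through differentiable operations, the same pooling can sit on top of any differentiable frame-level predictor and be trained end-to-end from clip-level (weak) labels, which is the usage the abstract describes.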