Adaptive Pooling Operators for Weakly Labeled Sound Event Detection

Cited by: 102
Authors
McFee, Brian [1 ,2 ]
Salamon, Justin [1 ,3 ]
Bello, Juan Pablo [1 ]
Affiliations
[1] NYU, Mus & Audio Res Lab, 550 1St Ave, New York, NY 10003 USA
[2] NYU, Ctr Data Sci, 550 1St Ave, New York, NY 10003 USA
[3] NYU, Ctr Urban Sci & Progress, Brooklyn, NY 11201 USA
Funding
U.S. National Science Foundation;
Keywords
Sound event detection; machine learning; multiple instance learning; deep learning;
DOI
10.1109/TASLP.2018.2858559
Chinese Library Classification (CLC): O42 [Acoustics];
Subject classification codes: 070206; 082403;
Abstract
Sound event detection (SED) methods are tasked with labeling segments of audio recordings by the presence of active sound sources. SED is typically posed as a supervised machine learning problem, requiring strong annotations for the presence or absence of each sound source at every time instant within the recording. However, strong annotations of this type are both labor- and cost-intensive for human annotators to produce, which limits the practical scalability of SED methods. In this paper, we treat SED as a multiple instance learning (MIL) problem, where training labels are static over a short excerpt, indicating the presence or absence of sound sources but not their temporal locality. The models, however, must still produce temporally dynamic predictions, which must be aggregated (pooled) when comparing against static labels during training. To facilitate this aggregation, we develop a family of adaptive pooling operators, referred to as autopool, which smoothly interpolate between common pooling operators, such as min-, max-, or average-pooling, and automatically adapt to the characteristics of the sound sources in question. We evaluate the proposed pooling operators on three datasets, and demonstrate that in each case, the proposed methods outperform nonadaptive pooling operators for static prediction, and nearly match the performance of models trained with strong, dynamic annotations. The proposed method is evaluated in conjunction with convolutional neural networks, but can be readily applied to any differentiable model for time-series label prediction. While this paper focuses on SED applications, the proposed methods are general, and could be applied widely to MIL problems in any domain.
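The pooling family described above can be sketched as a softmax-weighted average over frame-level predictions, where a single scalar parameter controls the interpolation: a value of zero recovers mean-pooling, and large positive or negative values approach max- or min-pooling, respectively. The sketch below is a minimal NumPy illustration of that idea; the function name `autopool` and the exact parameterization are assumptions based on the abstract, not a reproduction of the authors' implementation (in practice the parameter would be learned jointly with a differentiable model).

```python
import numpy as np

def autopool(x, alpha):
    """Softmax-weighted average of frame-level predictions x over time.

    alpha = 0   -> mean-pooling (uniform weights)
    alpha -> +inf -> max-pooling
    alpha -> -inf -> min-pooling

    Hypothetical sketch of the interpolating pooling operator
    described in the abstract, not the authors' exact code.
    """
    # Numerically stable softmax weights over the time axis.
    z = alpha * x
    z = z - z.max(axis=-1, keepdims=True)
    w = np.exp(z)
    w = w / w.sum(axis=-1, keepdims=True)
    # Weighted average collapses the dynamic predictions to a static one.
    return (w * x).sum(axis=-1)

# Frame-level probabilities for one clip:
frames = np.array([0.1, 0.9, 0.4])
print(autopool(frames, 0.0))    # mean of the frames
print(autopool(frames, 100.0))  # approaches the max frame
```

Because the weights are a softmax of `alpha * x`, the operator is differentiable in both the predictions and `alpha`, which is what allows the interpolation point to adapt to each sound class during training.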
Pages: 2180-2193
Page count: 14