Music Gesture for Visual Sound Separation

Cited by: 148
Authors
Gan, Chuang [1,2]
Huang, Deng [2 ]
Zhao, Hang [1 ]
Tenenbaum, Joshua B. [1 ]
Torralba, Antonio [1 ]
Institutions
[1] MIT, Cambridge, MA 02139 USA
[2] MIT-IBM Watson AI Lab, Cambridge, MA 02142 USA
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020) | 2020
DOI
10.1109/CVPR42600.2020.01049
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent deep learning approaches have achieved impressive performance on visual sound separation tasks. However, these approaches are mostly built on appearance and optical-flow-like motion feature representations, which exhibit limited ability to capture the correlations between audio signals and visual points, especially when separating multiple instruments of the same type, such as multiple violins in a scene. To address this, we propose "Music Gesture," a keypoint-based structured representation that explicitly models the body and finger movements of musicians as they perform music. We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals. Experimental results on three music performance datasets show: 1) strong improvements on benchmark metrics for hetero-musical separation tasks (i.e., different instruments); and 2) a new capability for effective homo-musical separation on piano, flute, and trumpet duets, which to the best of our knowledge has not been achieved with alternative methods.
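The abstract describes a two-stage design: a graph network over detected body and finger keypoints produces a per-musician motion embedding, which then conditions an audio network that separates that musician's instrument from the mixture spectrogram. As a rough illustration only, the following is a minimal PyTorch sketch of that idea; all module names, shapes, the single-step message passing, and the FiLM-style fusion are assumptions made here for illustration, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class KeypointGraphNet(nn.Module):
    """Toy graph network over T frames of K 2-D body keypoints.

    One round of message passing over a fixed skeleton adjacency,
    followed by pooling over time and joints -- a hypothetical
    stand-in for the paper's context-aware graph network.
    """

    def __init__(self, adjacency: torch.Tensor, dim: int = 128):
        super().__init__()
        self.register_buffer("adj", adjacency.float())  # (K, K)
        self.embed = nn.Linear(2, dim)      # lift (x, y) coordinates
        self.message = nn.Linear(dim, dim)

    def forward(self, keypoints: torch.Tensor) -> torch.Tensor:
        # keypoints: (B, T, K, 2) -> motion embedding (B, dim)
        h = torch.relu(self.embed(keypoints))                     # (B, T, K, D)
        msg = torch.einsum("kj,btjd->btkd", self.adj, self.message(h))
        h = torch.relu(h + msg)             # aggregate neighbor messages
        return h.mean(dim=(1, 2))           # pool over time and keypoints


class AudioVisualSeparator(nn.Module):
    """Predicts one musician's source from the mixture spectrogram,
    conditioned on that musician's motion embedding. FiLM-style
    channel modulation is used here as an assumed fusion mechanism.
    """

    def __init__(self, dim: int = 128):
        super().__init__()
        self.audio_enc = nn.Conv2d(1, dim, kernel_size=3, padding=1)
        self.film = nn.Linear(dim, 2 * dim)  # per-channel scale and shift
        self.mask_head = nn.Conv2d(dim, 1, kernel_size=1)

    def forward(self, mix_spec: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # mix_spec: (B, 1, F, T) magnitude spectrogram; motion: (B, dim)
        a = torch.relu(self.audio_enc(mix_spec))                  # (B, D, F, T)
        scale, shift = self.film(motion).chunk(2, dim=-1)
        a = a * scale[:, :, None, None] + shift[:, :, None, None]
        mask = torch.sigmoid(self.mask_head(a))                   # ratio mask
        return mask * mix_spec              # masked (separated) spectrogram


# Usage with random tensors (shapes are illustrative assumptions):
K = 17                                       # e.g., COCO-style body joints
adj = torch.eye(K)                           # placeholder skeleton adjacency
gnet, sep = KeypointGraphNet(adj), AudioVisualSeparator()
kps = torch.randn(2, 30, K, 2)               # 30 frames of keypoints
spec = torch.rand(2, 1, 256, 64)             # mixture spectrogram
separated = sep(spec, gnet(kps))             # -> (2, 1, 256, 64)
```

The key design point the sketch mirrors is that separation is conditioned per musician: running the separator once per detected player, each with its own motion embedding, is what lets the model discriminate between two instruments of the same type, where appearance alone is ambiguous.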
Pages: 10475-10484
Page count: 10