XFlow: Cross-Modal Deep Neural Networks for Audiovisual Classification

Times Cited: 17
Authors
Cangea, Catalina [1 ]
Velickovic, Petar [1 ]
Lio, Pietro [1 ]
Affiliations
[1] Univ Cambridge, Dept Comp Sci & Technol, Cambridge CB3 0FD, England
Keywords
Feature extraction; Task analysis; Videos; Correlation; Visualization; Neural networks; Deep learning; Audiovisual; cross modality; deep learning; integration; machine learning; multimodal
DOI
10.1109/TNNLS.2019.2945992
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In recent years, there have been numerous developments toward solving multimodal tasks, aiming to learn a stronger representation than is possible through a single modality. Certain aspects of the data can be particularly useful in this case, for example, correlations in the space or time domain across modalities, but they should be exploited wisely in order to benefit from their full predictive potential. We propose two deep learning architectures with multimodal cross connections that allow for dataflow between several feature extractors (XFlow). Our models derive more interpretable features and achieve better performance than models that do not exchange representations, usefully exploiting correlations between audio and visual data, which have different dimensionalities and are not trivially exchangeable. This article improves on existing multimodal deep learning algorithms in two essential ways: 1) it presents a novel method for performing cross modality (before features are learned from individual modalities) and 2) it extends the previously proposed cross connections, which only transfer information between streams that process compatible data. We illustrate some of the representations learned by the connections, analyze their contribution to the increase in discriminative ability, and show their compatibility with the intermediate representation of a lip-reading network. We provide the research community with Digits, a new data set consisting of three data types extracted from videos of people saying the digits 0-9. Results show that both cross-modal architectures outperform their baselines (by up to 11.5%) when evaluated on the AVletters, CUAVE, and Digits data sets, achieving state-of-the-art results.
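The core mechanism described above is a cross connection that lets features flow between an audio stream (1-D over time) and a visual stream (2-D over space) despite their dimensionality mismatch. A minimal conceptual sketch in PyTorch follows; the layer sizes, the pooling-and-broadcast fusion, and the CrossModalBlock class are illustrative assumptions, not the exact architecture from the paper.

```python
# Minimal conceptual sketch of a cross-modal connection between an audio (1-D)
# and a visual (2-D) feature extractor, in the spirit of XFlow's cross
# connections. All layer sizes and the fusion scheme are illustrative
# assumptions, not the published architecture.
import torch
import torch.nn as nn


class CrossModalBlock(nn.Module):
    def __init__(self, audio_ch=32, vis_ch=32):
        super().__init__()
        # Modality-specific feature extractors.
        self.audio_conv = nn.Conv1d(1, audio_ch, kernel_size=5, padding=2)
        self.vis_conv = nn.Conv2d(1, vis_ch, kernel_size=3, padding=1)
        # Cross connections: 1x1 projections so one modality's features can be
        # merged into the other stream despite the dimensionality mismatch.
        self.audio_to_vis = nn.Conv1d(audio_ch, vis_ch, kernel_size=1)
        self.vis_to_audio = nn.Conv2d(vis_ch, audio_ch, kernel_size=1)

    def forward(self, audio, frames):
        # audio: (batch, 1, time); frames: (batch, 1, height, width)
        a = torch.relu(self.audio_conv(audio))
        v = torch.relu(self.vis_conv(frames))

        # Audio -> visual: pool over time, broadcast over the spatial grid.
        a2v = self.audio_to_vis(a).mean(dim=-1)          # (batch, vis_ch)
        v = v + a2v[:, :, None, None]

        # Visual -> audio: pool over space, broadcast over time.
        v2a = self.vis_to_audio(v).mean(dim=(-2, -1))    # (batch, audio_ch)
        a = a + v2a[:, :, None]
        return a, v


# Toy usage: 200 audio feature steps and 48x48 mouth-region crops.
block = CrossModalBlock()
a_out, v_out = block(torch.randn(4, 1, 200), torch.randn(4, 1, 48, 48))
print(a_out.shape, v_out.shape)  # (4, 32, 200) and (4, 32, 48, 48)
```

The sketch only illustrates the general idea of exchanging representations between streams of different dimensionality; the paper's architectures additionally perform cross modality before per-modality features are learned.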
Pages: 3711-3720
Number of Pages: 10