Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification

Cited by: 96
Authors
Jiang, Yu-Gang [1 ]
Wu, Zuxuan [1 ]
Tang, Jinhui [2 ]
Li, Zechao [2 ]
Xue, Xiangyang [1 ]
Chang, Shih-Fu [3 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Jiangsu, Peoples R China
[3] Columbia Univ, Dept Elect Engn, New York, NY 10027 USA
Keywords
Deep learning; framework; fusion; video classification; ACTION RECOGNITION; EVENT RECOGNITION; LATE FUSION; HISTOGRAMS; FEATURES;
DOI
10.1109/TMM.2018.2823900
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Videos are inherently multimodal. This paper studies the problem of exploiting abundant multimodal clues for improved video classification performance. We introduce a novel hybrid deep learning framework that integrates useful clues from multiple modalities, including static spatial appearance information, motion patterns within a short time window, audio information, and long-range temporal dynamics. More specifically, we utilize three Convolutional Neural Networks (CNNs) operating on appearance, motion, and audio signals to extract their corresponding features. We then employ a feature fusion network to derive a unified representation, with the aim of capturing the relationships among features. Furthermore, to exploit the long-range temporal dynamics in videos, we apply two long short-term memory (LSTM) networks with the extracted appearance and motion features as inputs. Finally, we propose refining the prediction scores by leveraging contextual relationships among video semantics. The hybrid deep learning framework is thus able to exploit a comprehensive set of multimodal features for video classification. Through an extensive set of experiments, we demonstrate that: 1) LSTM networks that model sequences in an explicitly recurrent manner are highly complementary to the CNN models; 2) the feature fusion network, which produces a fused representation by modeling feature relationships, outperforms a large set of alternative fusion strategies; and 3) the semantic context of video classes can help further refine the predictions for improved performance. Experimental results on two challenging benchmarks, UCF-101 and the Columbia Consumer Videos (CCV) dataset, provide strong quantitative evidence that our framework produces promising results: 93.1% on UCF-101 and 84.5% on CCV, outperforming several competing methods by clear margins.
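As a concrete illustration of the pipeline the abstract outlines, below is a minimal PyTorch-style sketch: CNN features from the appearance, motion, and audio modalities feed a feature fusion network, frame-level appearance and motion features feed two LSTM branches, and the branch predictions are combined. All layer sizes, module names, and the simple score averaging here are illustrative assumptions; the paper's exact CNN backbones, fusion architecture, LSTM configuration, and semantic-context refinement step are not specified in this record.

import torch
import torch.nn as nn


class FusionNetwork(nn.Module):
    # Fuses per-modality CNN features into a unified representation
    # and maps it to class scores (this fusion design is an assumption).
    def __init__(self, dims, fused_dim=512, num_classes=101):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(sum(dims), fused_dim),
            nn.ReLU(),
            nn.Linear(fused_dim, num_classes),
        )

    def forward(self, feats):
        # feats: list of (batch, dim_i) tensors from the appearance,
        # motion, and audio CNNs.
        return self.fuse(torch.cat(feats, dim=1))


class TemporalBranch(nn.Module):
    # LSTM over a sequence of frame-level CNN features to capture
    # long-range temporal dynamics.
    def __init__(self, feat_dim, hidden=256, num_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, seq):
        # seq: (batch, time, feat_dim)
        out, _ = self.lstm(seq)
        return self.classifier(out[:, -1])  # score from the last time step


def hybrid_scores(app_feat, mot_feat, aud_feat, app_seq, mot_seq,
                  fusion, lstm_app, lstm_mot):
    # Combine the fusion-network and LSTM-branch predictions by simple
    # averaging (one plausible late-fusion choice, not necessarily the
    # weighting used in the paper).
    scores = [
        fusion([app_feat, mot_feat, aud_feat]),
        lstm_app(app_seq),
        lstm_mot(mot_seq),
    ]
    return torch.stack(scores).mean(dim=0)


# Usage with random stand-in features (batch of 2, 101 classes as in UCF-101):
fusion = FusionNetwork(dims=[2048, 2048, 128])
lstm_app, lstm_mot = TemporalBranch(2048), TemporalBranch(2048)
scores = hybrid_scores(
    torch.randn(2, 2048), torch.randn(2, 2048), torch.randn(2, 128),
    torch.randn(2, 30, 2048), torch.randn(2, 30, 2048),
    fusion, lstm_app, lstm_mot,
)  # -> tensor of shape (2, 101)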
Pages: 3137-3147
Number of pages: 11