Graph Convolutional Networks for Temporal Action Localization

Cited by: 426
Authors
Zeng, Runhao [1 ,2 ]
Huang, Wenbing [2 ,5 ]
Tan, Mingkui [1 ,4 ]
Rong, Yu [2 ]
Zhao, Peilin [2 ]
Huang, Junzhou [2 ]
Gan, Chuang [3 ]
Affiliations
[1] South China Univ Technol, Sch Software Engn, Guangzhou, Peoples R China
[2] Tencent AI Lab, Shenzhen, Peoples R China
[3] MIT, IBM Watson AI Lab, Cambridge, MA 02139 USA
[4] Peng Cheng Lab, Shenzhen, Peoples R China
[5] Tsinghua Univ, State Key Lab Intelligent Technol & Syst, Tsinghua Natl Lab Informat Sci & Technol TNList, Dept Comp Sci & Technol, Beijing, Peoples R China
Source
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019) | 2019
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICCV.2019.00719
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Most state-of-the-art action localization systems process each action proposal individually, without explicitly exploiting their relations during learning. However, the relations between proposals actually play an important role in action localization, since a meaningful action always consists of multiple proposals in a video. In this paper, we propose to exploit the proposal-proposal relations using Graph Convolutional Networks (GCNs). First, we construct an action proposal graph, where each proposal is represented as a node and the relation between two proposals as an edge. Here, we use two types of relations: one captures the context information for each proposal, and the other characterizes the correlations between distinct actions. Then we apply GCNs over the graph to model the relations among different proposals and learn powerful representations for action classification and localization. Experimental results show that our approach significantly outperforms the state-of-the-art on THUMOS14 (49.1% versus 42.8%). Moreover, augmentation experiments on ActivityNet also verify the efficacy of modeling action proposal relationships.
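To make the proposal-graph idea concrete, the following is a minimal sketch, not the authors' released code, of how one might build an adjacency matrix over temporal proposals and apply a single graph-convolution layer to their features. The PyTorch usage, the tIoU and center-distance edge heuristics, and all thresholds and feature dimensions are illustrative assumptions rather than the paper's exact settings.

# Minimal sketch (assumption: PyTorch-style tensors; heuristics are illustrative only).
import torch
import torch.nn as nn


def temporal_iou(a, b):
    # IoU between two temporal segments a = (start, end), b = (start, end).
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0


def build_adjacency(segments, iou_thresh=0.7, dist_thresh=0.3):
    # Contextual edges connect overlapping proposals; "surrounding" edges connect
    # disjoint proposals whose centers lie close in (normalized) time.
    # Both thresholds are assumptions for this sketch.
    n = len(segments)
    adj = torch.eye(n)  # self-loops
    for i in range(n):
        for j in range(i + 1, n):
            iou = temporal_iou(segments[i], segments[j])
            ci, cj = sum(segments[i]) / 2, sum(segments[j]) / 2
            if iou > iou_thresh or (iou == 0 and abs(ci - cj) < dist_thresh):
                adj[i, j] = adj[j, i] = 1.0
    # Symmetric normalization: D^{-1/2} A D^{-1/2}
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return adj * d_inv_sqrt.unsqueeze(0) * d_inv_sqrt.unsqueeze(1)


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Aggregate neighbor features, then transform: relu(A_hat X W)
        return torch.relu(self.linear(adj @ x))


# Toy usage: 4 proposals with (start, end) in normalized time and 1024-d features.
segments = [(0.10, 0.30), (0.15, 0.35), (0.50, 0.70), (0.72, 0.90)]
features = torch.randn(4, 1024)
adj = build_adjacency(segments)
gcn = GCNLayer(1024, 256)
enhanced = gcn(features, adj)  # relation-aware features for classification/localization heads

In a full pipeline, the relation-aware features would feed separate action classification and boundary regression heads; this sketch only shows the graph construction and message passing step.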
Pages: 7093-7102
Page count: 10