Exploiting spatio-temporal knowledge for video action recognition

Cited by: 3
Authors
Zhang, Huigang [1 ]
Wang, Liuan [1 ]
Sun, Jun [1 ]
Affiliations
[1] Fujitsu R&D Center, Beijing 100022, People's Republic of China
Keywords
action recognition; commonsense knowledge; GCN; STKM
DOI
10.1049/cvi2.12154
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Action recognition has been a popular area of computer vision research in recent years. The goal of this task is to recognise human actions in video frames. Most existing methods depend on visual features and their relationships inside the videos. These extracted features represent only the visual information of the current video itself and cannot capture the general knowledge of particular actions beyond the video. As a result, the features carry certain biases, and recognition performance still leaves room for improvement. In this study, we present a novel spatio-temporal knowledge module (STKM) to endow current methods with commonsense knowledge. To this end, we first collect hybrid external knowledge from universal fields, which contains both visual and semantic information. Graph convolutional networks (GCNs) are then used to represent and aggregate this knowledge. The GCNs involve (i) a spatial graph to capture spatial relations and (ii) a temporal graph to capture serial occurrence relations among actions. By integrating knowledge and visual features, we obtain better recognition results. Experiments on the AVA, UCF101-24 and JHMDB datasets show the robustness and generalisation ability of STKM. The results set a new state of the art of 32.0 mAP on AVA v2.1. On the UCF101-24 and JHMDB datasets, our method also improves over the baseline by 1.5 AP and 2.6 AP, respectively.
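The abstract gives no implementation details, so the following is only a minimal sketch of the kind of graph-convolution step it describes: a spatial action graph and a temporal action graph are each aggregated with a standard Kipf-and-Welling-style GCN layer, and the resulting knowledge embeddings are combined with visual features. The graph construction, tensor sizes, and concatenation-based fusion below are illustrative assumptions, not details taken from the paper.

```python
# Minimal, illustrative sketch (NOT the authors' implementation): one graph-
# convolution step over a toy knowledge graph whose nodes stand for action
# classes. Graph sizes, random adjacencies, and the fusion step are
# assumptions made for illustration only.
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: add self-loops, symmetrically normalise, project, ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                     # A + I (self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt             # D^-1/2 (A + I) D^-1/2
    return np.maximum(norm_adj @ feats @ weight, 0.0)      # aggregate, project, ReLU

rng = np.random.default_rng(0)
num_actions, in_dim, out_dim = 5, 16, 8

# Hypothetical graphs: a "spatial" graph (actions co-occurring in a frame)
# and a "temporal" graph (actions that tend to follow one another).
spatial_adj = (rng.random((num_actions, num_actions)) > 0.5).astype(float)
temporal_adj = (rng.random((num_actions, num_actions)) > 0.5).astype(float)
node_feats = rng.standard_normal((num_actions, in_dim))    # per-action knowledge features

spatial_emb = gcn_layer(spatial_adj, node_feats, rng.standard_normal((in_dim, out_dim)))
temporal_emb = gcn_layer(temporal_adj, node_feats, rng.standard_normal((in_dim, out_dim)))

# One plausible fusion: concatenate the knowledge embeddings with a visual
# feature before a final classifier (the paper's actual fusion may differ).
visual_feat = rng.standard_normal((num_actions, out_dim))
fused = np.concatenate([visual_feat, spatial_emb, temporal_emb], axis=1)
print(fused.shape)  # (5, 24)
```

In practice the adjacency matrices would be built from the collected external knowledge (for example, co-occurrence or ordering statistics between actions) rather than sampled at random as in this sketch.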
Pages: 222-230
Number of pages: 9