Large Scale Holistic Video Understanding

Cited by: 63
Authors
Diba, Ali [1 ,5 ]
Fayyaz, Mohsen [2 ]
Sharma, Vivek [3 ]
Paluri, Manohar [1 ,2 ,3 ,4 ,5 ]
Gall, Juergen [2]
Stiefelhagen, Rainer [3 ]
Van Gool, Luc [1 ,4 ,5 ]
Affiliations
[1] Katholieke Univ Leuven, Leuven, Belgium
[2] Univ Bonn, Bonn, Germany
[3] KIT Karlsruhe, Karlsruhe, Germany
[4] Swiss Fed Inst Technol, Zurich, Switzerland
[5] Sensifai, Brussels, Belgium
Source
COMPUTER VISION - ECCV 2020, PT V | 2020 / Vol. 12350
Keywords
DOI
10.1007/978-3-030-58558-7_35
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video recognition has been advanced in recent years by benchmarks with rich annotations. However, research is still mainly limited to human action or sports recognition - focusing on a highly specific video understanding task and thus leaving a significant gap towards describing the overall content of a video. We fill this gap by presenting the large-scale "Holistic Video Understanding Dataset" (HVU). HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem encompassing the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx. 572k videos in total with 9 million annotations for the training, validation and test sets, spanning 3142 labels. HVU covers semantic aspects defined on categories of scenes, objects, actions, events, attributes and concepts, which naturally capture real-world scenarios. We demonstrate the generalisation capability of HVU on three challenging tasks: 1) video classification, 2) video captioning and 3) video clustering. In particular, for video classification we introduce a new spatio-temporal deep neural network architecture called "Holistic Appearance and Temporal Network" (HATNet), which fuses 2D and 3D architectures into one by combining intermediate representations of appearance and temporal cues. HATNet focuses on the multi-label and multi-task learning problem and is trained in an end-to-end manner. Through our experiments, we validate that holistic representation learning is complementary and can play a key role in enabling many real-world applications. https://holistic-video-understanding.github.io/.
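To make the abstract's idea of fusing 2D appearance and 3D temporal cues with per-category heads more concrete, the following is a minimal sketch in PyTorch. It is not the authors' HATNet implementation; the module names, channel sizes, fusion strategy and task/label counts are all illustrative assumptions, and only the general pattern (frame-wise 2D branch, 3D branch, merged intermediate features, one multi-label head per semantic category) follows the abstract.

```python
# Illustrative sketch only: a simplified 2D/3D fusion network with one
# multi-label head per semantic category, in the spirit of HATNet's
# appearance/temporal fusion. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class TwoStreamFusionBlock(nn.Module):
    """Fuses a frame-wise (appearance) branch and a 3D (temporal) branch."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # (1, k, k) kernel: 2D convolution applied independently per frame.
        self.appearance = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # (k, k, k) kernel: 3D convolution capturing short-range temporal cues.
        self.temporal = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        # 1x1x1 convolution merging the concatenated intermediate features.
        self.merge = nn.Conv3d(2 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        a = self.act(self.appearance(x))
        t = self.act(self.temporal(x))
        return self.act(self.merge(torch.cat([a, t], dim=1)))


class HolisticVideoNet(nn.Module):
    """Toy backbone with one sigmoid-logit head per task (multi-label, multi-task)."""

    def __init__(self, labels_per_task: dict):
        super().__init__()
        self.backbone = nn.Sequential(
            TwoStreamFusionBlock(3, 32),
            nn.MaxPool3d(2),
            TwoStreamFusionBlock(32, 64),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # One classifier per semantic category, e.g. scenes, objects, actions, ...
        self.heads = nn.ModuleDict(
            {task: nn.Linear(64, n) for task, n in labels_per_task.items()}
        )

    def forward(self, clip):
        # clip: (batch, 3, frames, height, width)
        feats = self.backbone(clip)
        return {task: head(feats) for task, head in self.heads.items()}


if __name__ == "__main__":
    model = HolisticVideoNet({"scene": 10, "object": 20, "action": 15})
    logits = model(torch.randn(2, 3, 8, 112, 112))
    # Multi-label training would apply BCEWithLogitsLoss per head and sum over tasks.
    print({k: v.shape for k, v in logits.items()})
```

Training end-to-end with a per-head binary cross-entropy loss, summed across the category heads, is one straightforward way to realise the multi-label, multi-task objective the abstract describes.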
Pages: 593-610
Number of pages: 18