Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

Cited by: 140
Authors
Yu, Guang [1 ]
Wang, Siqi [1 ]
Cai, Zhiping [1 ]
Zhu, En [1 ]
Xu, Chuanfu [1 ]
Yin, Jianping [2 ]
Kloft, Marius [3 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
[2] Dongguan Univ Technol, Dongguan, Peoples R China
[3] TU Kaiserslautern, Kaiserslautern, Germany
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China
Keywords
Video anomaly detection; video event completion; CLASSIFICATION;
DOI
10.1145/3394171.3413973
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress with deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine and suffer from two gaps: (1) they cannot localize video activities in a manner that is both precise and comprehensive, and (2) they lack the ability to exploit high-level semantics and temporal context. Inspired by the cloze test frequently used in language study, we propose a new VAD solution named Video Event Completion (VEC) to bridge these gaps. First, we propose a novel pipeline that encloses video activities both precisely and comprehensively: appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs), and a normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as its basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test: a patch of the STC is erased to yield an incomplete event (IE), and the DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patches' optical flow. Finally, two ensemble strategies using different types of IEs and modalities are proposed to boost VAD performance, fully exploiting temporal context and modality information. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly used VAD benchmarks. Our code and results are available at github.com/yuguangnudt/VEC_VAD.
Pages: 583-591
Page count: 9
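To make the abstract's "visual cloze test" idea concrete, below is a minimal sketch (not the authors' implementation; see github.com/yuguangnudt/VEC_VAD for the official code). It erases one temporal patch of a spatio-temporal cube (STC) to form an incomplete event (IE) and trains a toy network to restore the missing patch; the network architecture, shapes, and hyperparameters are placeholders chosen only for illustration.

```python
# Hedged sketch of the visual cloze test described in the abstract.
# All class/function names here (CompletionNet, make_incomplete_event) are
# hypothetical and not taken from the paper's code.
import torch
import torch.nn as nn

class CompletionNet(nn.Module):
    """Toy encoder-decoder mapping an incomplete event (IE) to the erased patch."""
    def __init__(self, channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv3d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, ie):                      # ie: (B, C, T, H, W) with one patch zeroed
        feat = self.decoder(self.encoder(ie))   # (B, C, T, H, W)
        return feat.mean(dim=2)                 # (B, C, H, W): predicted missing patch

def make_incomplete_event(stc, erase_idx):
    """Zero out one temporal patch of the STC to build the cloze-test input."""
    target = stc[:, :, erase_idx].clone()       # the erased patch is the training target
    ie = stc.clone()
    ie[:, :, erase_idx] = 0.0
    return ie, target

# Toy training step: the network learns to complete normal events, so a high
# completion error at test time would flag an anomalous event under this scheme.
stc = torch.rand(8, 3, 5, 32, 32)               # batch of normalized STCs (B, C, T, H, W)
net = CompletionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
ie, target = make_incomplete_event(stc, erase_idx=2)
loss = nn.functional.mse_loss(net(ie), target)
opt.zero_grad(); loss.backward(); opt.step()
```

A second network of the same form could be trained to predict the erased patch's optical flow, and scores from different erased positions and modalities could then be ensembled, mirroring the strategies summarized in the abstract.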