Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

Cited by: 117
Authors
Yu, Guang [1 ]
Wang, Siqi [1 ]
Cai, Zhiping [1 ]
Zhu, En [1 ]
Xu, Chuanfu [1 ]
Yin, Jianping [2 ]
Kloft, Marius [3 ]
Affiliations
[1] National University of Defense Technology, Changsha, People's Republic of China
[2] Dongguan University of Technology, Dongguan, People's Republic of China
[3] TU Kaiserslautern, Kaiserslautern, Germany
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020
Funding
National Natural Science Foundation of China
Keywords
Video anomaly detection; video event completion; classification
DOI
10.1145/3394171.3413973
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine and suffer from two gaps: (1) they cannot localize video activities in a manner that is both precise and comprehensive; (2) they lack sufficient ability to exploit high-level semantics and temporal context. Inspired by the cloze test frequently used in language learning, we propose a brand-new VAD solution named Video Event Completion (VEC) to bridge these gaps. First, we propose a novel pipeline to achieve both precise and comprehensive enclosure of video activities: appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs), and a normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as its basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test: a certain patch of an STC is erased to yield an incomplete event (IE), and the DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patches' optical flow. Finally, two ensemble strategies using different types of IEs and modalities are proposed to boost VAD performance, so as to fully exploit temporal context and modality information. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly used VAD benchmarks. Our code and results are available at github.com/yuguangnudt/VEC_VAD.
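To make the visual cloze test concrete, below is a minimal, self-contained PyTorch sketch of the erase-and-complete training step on a single spatio-temporal cube. It is an illustration only, written under assumed shapes, layer sizes, and hyperparameters; the names ClozeCompletionNet and make_cloze_pair are hypothetical and not taken from the paper, whose actual implementation is at github.com/yuguangnudt/VEC_VAD.

```python
# Illustrative sketch of the "visual cloze test": erase one patch of a
# spatio-temporal cube (STC) and train a network to restore it. All shapes
# and layer sizes are assumptions, not the authors' configuration.
import torch
import torch.nn as nn

T, C, H, W = 5, 3, 32, 32  # assumed STC size: 5 stacked 32x32 RGB patches

class ClozeCompletionNet(nn.Module):
    """Predicts the erased patch from the remaining T-1 patches of an STC."""
    def __init__(self):
        super().__init__()
        # Encoder sees the incomplete event: T-1 kept patches stacked on channels.
        self.encoder = nn.Sequential(
            nn.Conv2d((T - 1) * C, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder reconstructs the single missing patch (C channels).
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, C, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, incomplete_event):
        return self.decoder(self.encoder(incomplete_event))

def make_cloze_pair(stc, erased_t):
    """Split an STC (T, C, H, W) into (incomplete event, erased target patch)."""
    keep = [t for t in range(T) if t != erased_t]
    incomplete = stc[keep].reshape((T - 1) * C, H, W)  # stack kept patches
    target = stc[erased_t]
    return incomplete, target

net = ClozeCompletionNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training step on a random "normal" STC; in practice each STC comes from
# an RoI localized by appearance and motion cues, as the abstract describes.
stc = torch.rand(T, C, H, W)
incomplete, target = make_cloze_pair(stc, erased_t=T // 2)
pred = net(incomplete.unsqueeze(0))
loss = loss_fn(pred, target.unsqueeze(0))
opt.zero_grad(); loss.backward(); opt.step()

# At test time, a high completion error flags the event as anomalous.
with torch.no_grad():
    score = loss_fn(net(incomplete.unsqueeze(0)), target.unsqueeze(0)).item()
print("anomaly score (completion error):", score)
```

The same recipe carries over to the abstract's other components: a second network of this form could be trained to predict the erased patch's optical flow, and ensembling over which patch is erased (and over the appearance and motion modalities) would yield the final anomaly score.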
Pages: 583-591
Number of pages: 9
Related Papers
50 records in total (showing [31]-[40])
  • [31] Cho, MyeongAh; Kim, Taeoh; Kim, Woo Jin; Cho, Suhwan; Lee, Sangyoun. Unsupervised video anomaly detection via normalizing flows with implicit latent features. PATTERN RECOGNITION, 2022, 129.
  • [32] Fan, Jin; Ji, Yuxiang; Wu, Huifeng; Ge, Yan; Sun, Danfeng; Wu, Jia. An unsupervised video anomaly detection method via Optical Flow decomposition and Spatio-Temporal feature learning. PATTERN RECOGNITION LETTERS, 2024, 185: 239-246.
  • [33] Li, Daoheng; Nie, Xiushan; Li, Xiaofeng; Zhang, Yu; Yin, Yilong. Context-related video anomaly detection via generative adversarial network. PATTERN RECOGNITION LETTERS, 2022, 156: 183-189.
  • [34] Nayak, Rashmiranjan; Pati, Umesh Chandra; Das, Santos Kumar. A comprehensive review on deep learning-based methods for video anomaly detection. IMAGE AND VISION COMPUTING, 2021, 106.
  • [35] Park, Chaewon; Kim, Donghyeong; Cho, Myeongah; Kim, Minjung; Lee, Minseok; Park, Seungwook; Lee, Sangyoun. Fast video anomaly detection via context-aware shortcut exploration and abnormal feature distance learning. PATTERN RECOGNITION, 2025, 157.
  • [36] Zhang, Liang; Li, Shifeng; Luo, Xi; Liu, Xiaoru; Zhang, Ruixuan. Video anomaly detection with both normal and anomaly memory modules. VISUAL COMPUTER, 2024: 3003-3015.
  • [37] Cheng, Kai; Liu, Yang; Zeng, Xinhua. Learning Graph Enhanced Spatial-Temporal Coherence for Video Anomaly Detection. IEEE SIGNAL PROCESSING LETTERS, 2023, 30: 314-318.
  • [38] Wang, Xin; Xie, Weixin; Song, Jiayi. Learning Spatiotemporal Features With 3DCNN and ConvGRU for Video Anomaly Detection. PROCEEDINGS OF 2018 14TH IEEE INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP), 2018: 474-479.
  • [39] Wu, Yuntao; Zeng, Kun; Li, Zuoyong; Peng, Zhonghua; Chen, Xiaobo; Hu, Rong. Learning a multi-cluster memory prototype for unsupervised video anomaly detection. INFORMATION SCIENCES, 2025, 686.
  • [40] Mo, Xuan; Monga, Vishal; Bala, Raja; Fan, Zhigang. Adaptive Sparse Representations for Video Anomaly Detection. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2014, 24(4): 631-645.