Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

Cited by: 117
Authors
Yu, Guang [1 ]
Wang, Siqi [1 ]
Cai, Zhiping [1 ]
Zhu, En [1 ]
Xu, Chuanfu [1 ]
Yin, Jianping [2 ]
Kloft, Marius [3 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
[2] Dongguan Univ Technol, Dongguan, Peoples R China
[3] TU Kaiserslautern, Kaiserslautern, Germany
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020年
Funding
National Natural Science Foundation of China;
Keywords
Video anomaly detection; video event completion; CLASSIFICATION;
DOI
10.1145/3394171.3413973
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress with deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame-prediction routine, and they suffer from two gaps: (1) they cannot localize video activities in a manner that is both precise and comprehensive; (2) they lack sufficient ability to exploit high-level semantics and temporal context. Inspired by the cloze test frequently used in language study, we propose a brand-new VAD solution named Video Event Completion (VEC) to bridge these gaps. First, we propose a novel pipeline that achieves both precise and comprehensive enclosure of video activities: appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs), and a normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as its basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test: a certain patch of an STC is erased to yield an incomplete event (IE), and the DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patch's optical flow. Finally, two ensemble strategies over different types of IEs and modalities are proposed to boost VAD performance, fully exploiting temporal context and modality information for VAD. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly used VAD benchmarks. Our code and results can be verified at github.com/yuguangnudt/VEC_VAD.
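The visual cloze test described in the abstract can be illustrated with a toy sketch: treat an STC as a short sequence of patches, erase each temporal patch in turn to form incomplete events, "complete" each one, and score anomaly by the completion error. This is only a minimal illustration of the idea, not the paper's method: the names (`make_stc`, `complete_patch`, `anomaly_score`) are hypothetical, and the element-wise mean stands in for the trained completion DNN.

```python
import random

# A video event is modeled as a spatio-temporal cube (STC): a sequence of
# T patches, each a flat list of pixel values.

T, PATCH_SIZE = 5, 16  # temporal length and flattened patch size

def make_stc(gen):
    """Build a toy STC of T patches from a value generator."""
    return [[gen() for _ in range(PATCH_SIZE)] for _ in range(T)]

def make_incomplete_events(stc):
    """Erase each temporal patch in turn, yielding T incomplete events.

    Each element is (index_of_erased_patch, remaining_patches)."""
    return [(i, stc[:i] + stc[i + 1:]) for i in range(len(stc))]

def complete_patch(remaining):
    """Stand-in 'completion network': predict the erased patch as the
    element-wise mean of the remaining patches. In VEC a trained DNN
    would infer the missing patch instead."""
    n = len(remaining)
    return [sum(p[k] for p in remaining) / n for k in range(PATCH_SIZE)]

def anomaly_score(stc):
    """Score = max completion error over all incomplete events (one
    simple way to combine the per-patch cloze tests)."""
    errors = []
    for i, remaining in make_incomplete_events(stc):
        pred = complete_patch(remaining)
        mse = sum((a - b) ** 2 for a, b in zip(pred, stc[i])) / PATCH_SIZE
        errors.append(mse)
    return max(errors)

random.seed(0)
normal = make_stc(lambda: 0.5 + random.uniform(-0.01, 0.01))  # smooth event
erratic = make_stc(lambda: random.uniform(0.0, 1.0))          # erratic event
print(anomaly_score(normal) < anomaly_score(erratic))  # expect True
```

Events that the completer handles well (here, the smooth one) get low scores, while events it fails to complete score high; the paper ensembles such scores over IE types and over the appearance and optical-flow modalities.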
Pages: 583-591
Page count: 9