Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events

Citations: 117
Authors
Yu, Guang [1 ]
Wang, Siqi [1 ]
Cai, Zhiping [1 ]
Zhu, En [1 ]
Xu, Chuanfu [1 ]
Yin, Jianping [2 ]
Kloft, Marius [3 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
[2] Dongguan Univ Technol, Dongguan, Peoples R China
[3] TU Kaiserslautern, Kaiserslautern, Germany
Source
MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA | 2020
Funding
National Natural Science Foundation of China;
Keywords
Video anomaly detection; video event completion; CLASSIFICATION;
DOI
10.1145/3394171.3413973
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a vital topic in media content interpretation, video anomaly detection (VAD) has made fruitful progress via deep neural networks (DNNs). However, existing methods usually follow a reconstruction or frame prediction routine. They suffer from two gaps: (1) They cannot localize video activities in a manner that is both precise and comprehensive. (2) They lack sufficient ability to utilize high-level semantics and temporal context information. Inspired by the cloze test frequently used in language study, we propose a brand-new VAD solution named Video Event Completion (VEC) to bridge the gaps above: First, we propose a novel pipeline to achieve both precise and comprehensive enclosure of video activities. Appearance and motion are exploited as mutually complementary cues to localize regions of interest (RoIs). A normalized spatio-temporal cube (STC) is built from each RoI as a video event, which lays the foundation of VEC and serves as a basic processing unit. Second, we encourage the DNN to capture high-level semantics by solving a visual cloze test. To build such a visual cloze test, a certain patch of an STC is erased to yield an incomplete event (IE). The DNN learns to restore the original video event from the IE by inferring the missing patch. Third, to incorporate richer motion dynamics, another DNN is trained to infer the erased patches' optical flow. Finally, two ensemble strategies using different types of IE and modalities are proposed to boost VAD performance, so as to fully exploit the temporal context and modality information for VAD. VEC consistently outperforms state-of-the-art methods by a notable margin (typically 1.5%-5% AUROC) on commonly-used VAD benchmarks. Our code and results can be verified at github.com/yuguangnudt/VEC_VAD.
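The abstract's core mechanism can be sketched in a few lines: crop the same RoI from consecutive frames to form a normalized STC, erase one temporal patch to create an incomplete event, and score an event by how well a completion model restores the erased patch. The sketch below is a minimal illustration with NumPy only; the names (`build_stc`, `make_incomplete_events`, `anomaly_score`) and the erase-one-frame-at-a-time scheme are illustrative assumptions, not the authors' released implementation, and the completion model is left as a caller-supplied function standing in for the trained DNN.

```python
import numpy as np

def build_stc(frames, box, size=(32, 32)):
    """Crop the same RoI box from each frame and stack the crops into a
    normalized spatio-temporal cube (STC) of shape (T, H, W)."""
    x0, y0, x1, y1 = box
    patches = []
    for f in frames:
        crop = f[y0:y1, x0:x1]
        # Nearest-neighbour resize to the normalized patch size.
        ys = np.linspace(0, crop.shape[0] - 1, size[0]).astype(int)
        xs = np.linspace(0, crop.shape[1] - 1, size[1]).astype(int)
        patches.append(crop[np.ix_(ys, xs)])
    return np.stack(patches)

def make_incomplete_events(stc):
    """Erase each temporal patch in turn, yielding T incomplete events
    (IEs); the erased index is the 'blank' of the visual cloze test."""
    ies = []
    for t in range(stc.shape[0]):
        ie = stc.copy()
        ie[t] = 0.0  # zero out the erased patch
        ies.append((t, ie))
    return ies

def anomaly_score(stc, complete_fn):
    """Average completion error over all IEs of one event: a model
    trained on normal events fails to 'fill in the blank' for an
    anomalous one, so a high error flags an anomaly."""
    errs = []
    for t, ie in make_incomplete_events(stc):
        pred = complete_fn(ie, t)          # model restores patch t
        errs.append(np.mean((pred - stc[t]) ** 2))
    return float(np.mean(errs))
```

In the paper the completion is done by a trained DNN (plus a second network predicting the erased patch's optical flow, and ensembles over IE types and modalities); here any callable `complete_fn(ie, t) -> (H, W) array` can be plugged in to see the scoring logic end to end.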
Pages: 583-591 (9 pages)