Toward Video Anomaly Retrieval From Video Anomaly Detection: New Benchmarks and Model

Cited by: 18
Authors
Wu, Peng [1 ]
Liu, Jing [2 ]
He, Xiangteng [3 ]
Peng, Yuxin [3 ]
Wang, Peng [1 ]
Zhang, Yanning [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Natl Engn Lab Integrated Aerosp Ground Ocean Big D, Xian 710060, Peoples R China
[2] Xidian Univ, Guangzhou Inst Technol, Guangzhou 510555, Peoples R China
[3] Peking Univ, Wangxuan Inst Comp Technol, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Video anomaly retrieval; video anomaly detection; cross-modal retrieval;
DOI
10.1109/TIP.2024.3374070
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Video anomaly detection (VAD) has received increasing attention due to its potential applications. Its current dominant tasks focus on detecting anomalies online, which can be roughly interpreted as binary or multi-class event classification. However, such a setup, which relates complicated anomalous events to single labels, e.g., "vandalism", is superficial, since single labels are insufficient to characterize anomalous events. In reality, users tend to search for a specific video rather than a series of approximate videos. Therefore, retrieving anomalous events using detailed descriptions is practical and valuable, but few studies focus on this. In this context, we propose a novel task called Video Anomaly Retrieval (VAR), which aims to pragmatically retrieve relevant anomalous videos via cross-modal queries, e.g., language descriptions and synchronous audio. Unlike current video retrieval, where videos are assumed to be temporally well-trimmed and of short duration, VAR is devised to retrieve long untrimmed videos that may be only partially relevant to the given query. To this end, we present two large-scale VAR benchmarks and design a model called Anomaly-Led Alignment Network (ALAN) for VAR. In ALAN, we propose anomaly-led sampling to focus on key segments in long untrimmed videos. We then introduce an efficient pretext task to enhance semantic associations between fine-grained video-text representations. In addition, we leverage two complementary alignments to further match cross-modal content. Experimental results on the two benchmarks reveal the challenges of the VAR task and demonstrate the advantages of our tailored method. Captions are publicly released at https://github.com/Roc-Ng/VAR.
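For context, cross-modal retrieval of the kind the abstract describes typically ranks candidate videos by the similarity between a text-query embedding and per-video embeddings in a shared space. The sketch below is a generic illustration with made-up toy embeddings and cosine-similarity ranking; it is not the paper's ALAN model, whose sampling and alignment components are described in the full text:

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def rank_videos(text_emb, video_embs):
    """Return video indices sorted from most to least similar to the text query."""
    query = text_emb / (np.linalg.norm(text_emb) + 1e-8)
    sims = l2_normalize(video_embs) @ query  # one cosine score per video
    return np.argsort(-sims), sims

# Toy embeddings: three videos and a query that paraphrases video 1.
video_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.0]])
query_emb = np.array([0.1, 0.9, 0.1, 0.0])

order, sims = rank_videos(query_emb, video_embs)
print(order[0])  # index of the best-matching video -> 1
```

In a real VAR system the embeddings would come from trained video and text encoders, and the ranking step would run over a large gallery of long untrimmed videos.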
Pages: 2213-2225
Page count: 13