Adaptive Video Anomaly Detection by Attention-Based Relational Knowledge Distillation

Cited by: 0
Authors
Asal, Burcak [1 ]
Can, Ahmet Burak [2 ]
Affiliations
[1] Adana Alparslan Turkes Sci & Technol Univ, Dept Comp Engn, TR-01250 Adana, Turkiye
[2] Hacettepe Univ, Dept Comp Engn, TR-06800 Ankara, Turkiye
Keywords
Anomaly detection; Adaptation models; Training; Data models; Feature extraction; Deep learning; Weak supervision; Long short term memory; Noise; Knowledge engineering; AR-Net; computer vision; GCN; knowledge distillation; relational approaches; video anomaly detection
DOI
10.1109/ACCESS.2025.3585984
CLC Number
TP [Automation and Computer Technology];
Subject Classification Code
0812 ;
Abstract
Detecting anomalous patterns in videos is a challenging task due to complex scenes, the huge diversity of anomalies, and the fuzzy nature of the task itself. With the advent of technology, video surveillance systems generate tremendous amounts of visual data, making it harder for human operators to search, analyze, and detect anomalies in video data. In this paper, we introduce three relational distillation approaches that provide both robust detection of anomalous events and gradual adaptation to different anomaly patterns in new videos, without forgetting anomaly patterns learned from previous video data. To realize these concepts, we propose a unique attention mechanism with feature-based and relation-based knowledge distillation methods. We adapt our knowledge distillation methods to two state-of-the-art models designed for the anomaly detection task. Our extensive experiments on two public datasets show not only that our best model achieves robust performance, with a frame-level AUC of 80.22 on UCF-Crime and a video-level AUC of 78.20 on RWF-2000, but also that the proposed distillation methods improve performance while reducing the catastrophic forgetting problem.
Pages: 117170-117185
Page count: 16
References
58 in total
[1] Asal, B. 2024, Ph.D. dissertation.
[2] Asal, Burcak; Can, Ahmet Burak. Ensemble-Based Knowledge Distillation for Video Anomaly Detection. APPLIED SCIENCES-BASEL, 2024, 14(3).
[3] Beyer, Lucas; Zhai, Xiaohua; Royer, Amelie; Markeeva, Larisa; Anil, Rohan; Kolesnikov, Alexander. Knowledge distillation: A good teacher is patient and consistent. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 10915-10924.
[4] Carreira, Joao; Zisserman, Andrew. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 4724-4733.
[5] Chalapathy, R. 2019, arXiv, DOI arXiv:1901.03407.
[6] Chen, Pengguang; Liu, Shu; Zhao, Hengshuang; Jia, Jiaya. Distilling Knowledge via Knowledge Review. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 5006-5015.
[7] Cheng, Ming; Cai, Kunjing; Li, Ming. RWF-2000: An Open Large Scale Video Database for Violence Detection. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 4183-4190.
[8] Cho, Jang Hyun; Hariharan, Bharath. On the Efficacy of Knowledge Distillation. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019: 4793-4801.
[9] Chong, Yong Shean; Tay, Yong Haur. Abnormal Event Detection in Videos Using Spatiotemporal Autoencoder. ADVANCES IN NEURAL NETWORKS, PT II, 2017, 10262: 189-196.
[10] Dai, Cheng; Liu, Xingang; Li, Zhuolin; Chen, Mu-Yen. A tucker decomposition based knowledge distillation for intelligent edge applications. APPLIED SOFT COMPUTING, 2021, 101.