A fuzzy-based frame transformation to mitigate the impact of adversarial attacks in deep learning-based real-time video surveillance systems

Cited by: 0
Author
Ul Haque, Sheikh Burhan [1]
Affiliation
[1] Bennett Univ, Sch Comp Sci Engn & Technol, Greater Noida, UP, India
Keywords
Adversarial attacks; Deep learning; Fuzzy sets; Smart city; Surveillance systems; IoT
DOI
10.1016/j.asoc.2024.112440
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Deep learning (DL) techniques have become integral to smart city projects, including video surveillance systems (VSS). These technologies offer significant benefits, such as improved accuracy and efficiency in monitoring and managing urban environments. Despite these advantages, however, such systems remain vulnerable to adversarial attacks, which can cause critical misclassifications during inference. To address this challenge, our research focuses on developing a more robust smart city VSS and proceeds in two stages. First, we introduce a framework that extends adversarial attacks to real-time VSS: we implement a real-time face mask surveillance system based on Multi-Task Cascaded Convolutional Networks (MTCNN) for face detection and MobileNet-v2 for face mask classification, and subject it to the Fast Gradient Sign Method (FGSM) adversarial attack in real time. Second, we propose a defense mechanism that deploys Fuzzy Image Transformation as a Pre-processing unit (FITP), which fortifies the real-time VSS against adversarial perturbations. Experimental findings confirm the effectiveness of the proposed real-time attack framework: the model's precision (P), recall (R), F1 score (F), and accuracy (A) drop from 93%, 93%, 93%, and 93% to 22%, 21%, 22%, and 22%, respectively. With the proposed defense in place, performance recovers markedly, with P, R, F, and A rising to 91%, 90%, 91%, and 91%. This research illuminates the vulnerabilities of VSS to adversarial threats and underscores the need for heightened awareness and robust defense mechanisms before real-world deployment.
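The abstract names the components of the pipeline (MTCNN for face detection, MobileNet-v2 for mask classification, an FGSM attack, and a fuzzy image transformation used as a pre-processing unit) but gives no implementation details. The PyTorch sketch below is only illustrative: the two-class MobileNet-v2 head, the epsilon value, and the piecewise-linear membership function in fuzzy_preprocess (with its low/high parameters) are assumptions made for this example, not the paper's actual FITP design or experimental setup.

import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_perturb(model, face_crop, label, eps=0.03):
    # One-step FGSM: x_adv = x + eps * sign(dL/dx), clipped back to the valid [0, 1] range.
    x = face_crop.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def fuzzy_preprocess(frame, low=0.25, high=0.75):
    # Assumed fuzzy-style transform (not necessarily the paper's FITP): each pixel intensity is
    # mapped through a piecewise-linear membership function that saturates values below `low`
    # to 0 and above `high` to 1, and rescales in-between values linearly, so perturbations in
    # the saturated regions are discarded before classification.
    return ((frame - low) / (high - low)).clamp(0.0, 1.0)

if __name__ == "__main__":
    model = models.mobilenet_v2(num_classes=2)  # mask / no-mask classifier (untrained stand-in)
    model.eval()
    face_crop = torch.rand(1, 3, 224, 224)      # stand-in for a face crop produced by MTCNN
    label = torch.tensor([1])                   # assumed ground-truth class index
    adversarial = fgsm_perturb(model, face_crop, label)
    defended = fuzzy_preprocess(adversarial)
    print(model(adversarial).argmax(dim=1), model(defended).argmax(dim=1))

In a deployed system of the kind the abstract describes, such a transform would sit between the face detector's crop and the classifier, so every frame is filtered before inference regardless of whether it has been perturbed.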
Pages: 27