Evaluating and Improving Adversarial Robustness of Deep Learning Models for Intelligent Vehicle Safety

Cited by: 1
Authors:
Hussain, Manzoor [1]
Hong, Jang-Eui [1]
Affiliations:
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Comp Sci, Cheongju 28644, South Korea
Funding:
National Research Foundation of Singapore
Keywords:
Perturbation methods; Training; Safety; Prevention and mitigation; Iterative methods; Computational modeling; Accuracy; Transportation; Roads; Adversarial attacks; adversarial defense; autoencoder; deep learning (DL); generative adversarial neural network; robustness; trusted artificial intelligence; ATTACK
DOI:
10.1109/TR.2024.3458805
Chinese Library Classification (CLC):
TP3 [computing technology; computer technology]
Discipline code:
0812
Abstract
Deep learning models have proven their effectiveness in intelligent transportation. However, their vulnerability to adversarial attacks poses significant challenges to traffic safety. This article therefore presents a novel technique to evaluate and improve the adversarial robustness of deep learning models. First, we propose a deep-convolutional-autoencoder-based adversarial attack detector that identifies whether input samples are adversarial; it serves as a preliminary step toward adversarial attack mitigation. Second, we develop a conditional generative adversarial neural network (c-GAN) that transforms adversarial images back to their original form, alleviating adversarial attacks by restoring the integrity of perturbed images. We present a case study on a traffic sign recognition model to validate our approach. The experimental results show the effectiveness of the adversarial attack mitigator, which achieves an average structural similarity index measure (SSIM) of 0.43 on the Laboratory for Intelligent and Safe Automobiles (LISA)-convolutional neural network (CNN) dataset and 0.38 on the German Traffic Sign Recognition Benchmark (GTSRB)-CNN dataset. In terms of peak signal-to-noise ratio (PSNR), the c-GAN model attains an average of 18.65 dB on the LISA-CNN dataset and 18.05 dB on the GTSRB-CNN dataset. Ultimately, the proposed method significantly enhanced the average detection accuracy of adversarial traffic signs, elevating it from 72.66% to 98% on the LISA-CNN dataset. In addition, an average accuracy improvement of 28% was observed on the GTSRB-CNN dataset.
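The detector described in the abstract flags inputs whose autoencoder reconstruction error is anomalously high. A minimal numpy sketch of that thresholding idea, using a linear (PCA-based) autoencoder as a stand-in for the paper's deep convolutional one; the toy data, `k=8` bottleneck, and 99th-percentile threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Tied-weight linear autoencoder via PCA: encoder = top-k principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # mean vector and k x d weight matrix

def reconstruction_error(x, mu, W):
    z = W @ (x - mu)       # encode into the k-dim bottleneck
    x_hat = mu + W.T @ z   # decode back to input space
    return float(np.mean((x - x_hat) ** 2))

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 32))  # toy 8-dim "clean data" manifold in 32 dims
clean = rng.normal(size=(200, 8)) @ B + 0.01 * rng.normal(size=(200, 32))
mu, W = fit_linear_autoencoder(clean, k=8)

# Calibrate the detection threshold on clean samples only.
errors = [reconstruction_error(x, mu, W) for x in clean]
tau = np.percentile(errors, 99)

def is_adversarial(x):
    """Flag inputs that reconstruct poorly under the clean-data autoencoder."""
    return reconstruction_error(x, mu, W) > tau
```

A perturbed input falls off the clean-data manifold, reconstructs poorly, and is flagged; clean inputs reconstruct almost exactly and pass through to the classifier.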
Pages: 15
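The abstract reports restoration quality via SSIM and PSNR. A minimal numpy sketch of both metrics, for intuition about the reported scores; note this SSIM is computed globally over the whole image, whereas standard implementations average over local windows, so values will differ from library results:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Global (unwindowed) SSIM with the standard stability constants."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den
```

Identical images give SSIM 1.0 and infinite PSNR; heavier perturbation drives SSIM toward 0 and PSNR down, which is why higher restored-image SSIM/PSNR indicates better mitigation.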