Evaluating and Improving Adversarial Robustness of Deep Learning Models for Intelligent Vehicle Safety

Cited by: 1
Authors
Hussain, Manzoor [1 ]
Hong, Jang-Eui [1 ]
Affiliations
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Comp Sci, Cheongju 28644, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Perturbation methods; Training; Safety; Prevention and mitigation; Iterative methods; Computational modeling; Accuracy; Transportation; Roads; Adversarial attacks; adversarial defense; autoencoder; deep learning (DL); generative adversarial neural network; robustness; trusted artificial intelligence; ATTACK;
DOI
10.1109/TR.2024.3458805
CLC Classification Number
TP3 [Computing technology, computer technology];
Discipline Classification Code
0812;
Abstract
Deep learning models have proven their effectiveness in intelligent transportation; however, their vulnerability to adversarial attacks poses significant challenges to traffic safety. This article therefore presents a novel technique to evaluate and improve the adversarial robustness of deep learning models. We first propose a deep-convolutional-autoencoder-based adversarial attack detector that identifies whether or not an input sample is adversarial, serving as a preliminary step toward attack mitigation. Second, we develop a conditional generative adversarial neural network (c-GAN) that transforms adversarial images back to their original form, alleviating the attack by restoring the integrity of the perturbed images. We present a case study on a traffic sign recognition model to validate our approach. The experimental results show the effectiveness of the adversarial attack mitigator, which achieves an average structural similarity index measure (SSIM) of 0.43 on the Laboratory for Intelligent and Safe Automobiles (LISA)-convolutional neural network (CNN) dataset and 0.38 on the German Traffic Sign Recognition Benchmark (GTSRB)-CNN dataset. In terms of peak signal-to-noise ratio (PSNR), the c-GAN model attains an average of 18.65 dB on the LISA-CNN dataset and 18.05 dB on the GTSRB-CNN dataset. Ultimately, the proposed method significantly enhances the average detection accuracy of adversarial traffic signs, elevating it from 72.66% to 98% on the LISA-CNN dataset; an average accuracy improvement of 28% is also observed on the GTSRB-CNN dataset.
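The abstract describes a two-stage defense: an autoencoder that flags adversarial inputs by reconstruction error, followed by a c-GAN that restores them, with restoration quality scored by SSIM and PSNR. Below is a minimal PyTorch sketch of the first stage only (a convolutional-autoencoder detector with a threshold calibrated on clean images) plus a PSNR helper; it is not the authors' implementation, and the network depth, the 32x32 RGB input size, and the 95th-percentile threshold are illustrative assumptions.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder for 3x32x32 traffic-sign images (assumed size)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def calibrate_threshold(model, clean_loader, percentile=95.0, device="cpu"):
    """Set the detection threshold from reconstruction errors on clean images only."""
    model.eval()
    errors = []
    with torch.no_grad():
        for x, _ in clean_loader:
            err = ((model(x.to(device)) - x.to(device)) ** 2).mean(dim=(1, 2, 3))  # per-image MSE
            errors.append(err.cpu())
    return torch.quantile(torch.cat(errors), percentile / 100.0).item()

def is_adversarial(model, x, threshold, device="cpu"):
    """Flag inputs whose reconstruction error exceeds the calibrated threshold."""
    model.eval()
    with torch.no_grad():
        err = ((model(x.to(device)) - x.to(device)) ** 2).mean(dim=(1, 2, 3))
    return err > threshold

def psnr(restored, original, max_val=1.0):
    """Peak signal-to-noise ratio (in dB) between a restored image and its original."""
    mse = ((restored - original) ** 2).mean()
    return 10.0 * torch.log10(max_val ** 2 / mse)

Calibrating the threshold on clean data only means the detector needs no adversarial examples at training time, which matches the abstract's framing of detection as a preliminary step before the c-GAN-based mitigation.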
Pages: 15