Evaluating and Improving Adversarial Robustness of Deep Learning Models for Intelligent Vehicle Safety

Cited by: 1
Authors
Hussain, Manzoor [1 ]
Hong, Jang-Eui [1 ]
Affiliation
[1] Chungbuk Natl Univ, Coll Elect & Comp Engn, Dept Comp Sci, Cheongju 28644, South Korea
Funding
National Research Foundation of Singapore
Keywords
Perturbation methods; Training; Safety; Prevention and mitigation; Iterative methods; Computational modeling; Accuracy; Transportation; Roads; Adversarial attacks; adversarial defense; autoencoder; deep learning (DL); generative adversarial neural network; robustness; trusted artificial intelligence; ATTACK;
DOI
10.1109/TR.2024.3458805
Chinese Library Classification
TP3 [Computing technology; computer technology]
Subject Classification Code
0812
Abstract
Deep learning models have proven their effectiveness in intelligent transportation. However, their vulnerability to adversarial attacks poses significant challenges to traffic safety. This article therefore presents a novel technique to evaluate and improve the adversarial robustness of deep learning models. First, we propose a deep-convolutional-autoencoder-based adversarial attack detector that identifies whether input samples are adversarial; it serves as a preliminary step toward adversarial attack mitigation. Second, we develop a conditional generative adversarial network (c-GAN) that transforms adversarial images back to their original form, alleviating adversarial attacks by restoring the integrity of the perturbed images. We present a case study on a traffic sign recognition model to validate our approach. The experimental results show the effectiveness of the adversarial attack mitigator, which achieves an average structural similarity index measure (SSIM) of 0.43 on the LISA-CNN (Laboratory for Intelligent and Safe Automobiles convolutional neural network) dataset and 0.38 on the GTSRB-CNN (German Traffic Sign Recognition Benchmark) dataset. In terms of peak signal-to-noise ratio (PSNR), the c-GAN model attains an average of 18.65 on LISA-CNN and 18.05 on GTSRB-CNN. Ultimately, the proposed method significantly improves the average detection accuracy of adversarial traffic signs on the LISA-CNN dataset, raising it from 72.66% to 98%; on GTSRB-CNN, an average accuracy improvement of 28% is observed.
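The abstract's two-stage pipeline (first detect adversarial inputs by reconstruction error, then judge restoration quality with PSNR) can be illustrated with a minimal NumPy sketch. This is not the paper's method: the `reconstruct` stand-in, the error threshold, and the toy images are assumptions for illustration only, whereas the actual detector is a trained deep convolutional autoencoder and the restorer a c-GAN.

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio, the restoration metric the abstract
    reports (higher means the restored image is closer to the original)."""
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def is_adversarial(x, reconstruct, threshold=5.0):
    """Autoencoder-style detection sketch: flag an input whose mean
    squared reconstruction error exceeds a threshold. `reconstruct`
    stands in for a trained autoencoder's encode/decode pass."""
    err = np.mean((x.astype(np.float64) - reconstruct(x)) ** 2)
    return err > threshold

# Toy data: a flat 8x8 "sign" and a uniformly perturbed copy of it.
clean = np.full((8, 8), 128, dtype=np.uint8)
adv = clean.astype(np.float64) + 10.0

# Hypothetical "autoencoder" that reconstructs the clean image exactly.
reconstruct = lambda x: clean.astype(np.float64)

print(is_adversarial(clean, reconstruct))             # clean input passes
print(is_adversarial(adv, reconstruct))               # perturbed input is flagged
print(round(psnr(clean.astype(np.float64), adv), 2))  # distance of adv from clean
```

In the paper's setting the threshold would be calibrated on clean validation data, and PSNR/SSIM would compare c-GAN-restored images against their unperturbed originals rather than raw adversarial inputs.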
Pages: 15
Related Papers
(50 total; entries [21]-[30] shown)
  • [21] Detecting Adversarial Samples for Deep Learning Models: A Comparative Study
    Zhang, Shigeng
    Chen, Shuxin
    Liu, Xuan
    Hua, Chengyao
    Wang, Weiping
    Chen, Kai
    Zhang, Jian
    Wang, Jianxin
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2022, 9 (01): : 231 - 244
  • [22] IMPROVING ROBUSTNESS OF DEEP NETWORKS USING CLUSTER-BASED ADVERSARIAL TRAINING
    Rasheed, Bader
    Khan, Adil
    RUSSIAN LAW JOURNAL, 2023, 11 (09) : 412 - 420
  • [23] Adversarial Deep Reinforcement Learning for Improving the Robustness of Multi-agent Autonomous Driving Policies
    Sharif, Aizaz
    Marijan, Dusica
    2022 29TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE, APSEC, 2022, : 61 - 70
  • [24] Adversarial Robustness in Deep Learning: From Practices to Theories
    Xu, Han
    Li, Yaxin
    Liu, Xiaorui
    Wang, Wentao
    Tang, Jiliang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 4086 - 4087
  • [25] Analyzing the Robustness of Deep Learning Against Adversarial Examples
    Zhao, Jun
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 1060 - 1064
  • [26] A system for Evaluating the Robustness of Embedded Intelligent Chips and Models
    Wang, Chenguang
    Sun, Zhixiao
    Luo, Qing
    Wang, Xinyu
    Zhang, Tao
    Wei, QianRu
    Cheng, Jing
    Gao, Depeng
    2021 21ST INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C 2021), 2021, : 298 - 305
  • [27] A novel method for improving the robustness of deep learning-based malware detectors against adversarial attacks
    Shaukat, Kamran
    Luo, Suhuai
    Varadharajan, Vijay
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 116
  • [28] A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations
    Liu, Qisai
    Lee, Xian Yeow
    Sarkar, Soumik
    AI OPEN, 2024, 5 : 126 - 141
  • [29] A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis
    Apostolidis, Kyriakos D.
    Papakostas, George A.
    ELECTRONICS, 2021, 10 (17)
  • [30] Safety and Robustness for Deep Learning with Provable Guarantees
    Kwiatkowska, Marta
    2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), 2020, : 1 - 3