Evaluation of the impact of physical adversarial attacks on deep learning models for classifying COVID cases

Cited by: 4
Authors
de Aguiar, Erikson J. [1 ]
Marcomini, Karem D. [1 ]
Quirino, Felipe A. [1 ]
Gutierrez, Marco A. [2 ]
Traina, Caetano, Jr. [1 ]
Traina, Agma J. M. [1 ]
Affiliations
[1] Univ Sao Paulo, Inst Math & Comp Sci, Sao Carlos, Brazil
[2] Univ Sao Paulo, Heart Inst, Clin Hosp, Med Sch, Sao Paulo, Brazil
Source
MEDICAL IMAGING 2022: COMPUTER-AIDED DIAGNOSIS | 2022, Vol. 12033
Funding
Sao Paulo Research Foundation (FAPESP), Brazil
Keywords
Adversarial attacks; deep neural networks; COVID-19; Fast Gradient Sign Method;
DOI
10.1117/12.2611199
CLC Number
R318 [Biomedical Engineering]
Subject Classification Code
0831
Abstract
The SARS-CoV-2 (COVID-19) disease rapidly spread worldwide, increasing the need for new strategies to fight it. Researchers in several fields have attempted to develop methods to identify the disease early and mitigate its effects. Deep Learning (DL) approaches, such as Convolutional Neural Networks (CNNs), have been increasingly used in COVID-19 diagnosis. These models are intended to support decision-making and perform well at detecting patient status early. Although DL models achieve good accuracy in supporting diagnosis, they are vulnerable to adversarial attacks: methods that mislead DL models by adding small perturbations to the original image. This paper investigates the impact of adversarial attacks on DL models for classifying X-ray images of COVID-19 cases. We focus on the Fast Gradient Sign Method (FGSM) attack, which perturbs the test images by combining them with a perturbation matrix, producing a crafted image. We conducted experiments analyzing the models' performance both attack-free and under attack. The following CNN models were selected: DenseNet201, ResNet-50V2, MobileNetV2, NASNet, and VGG16. In the attack-free setting, we reached precision of around 99%. When the attack was added, all models suffered a performance reduction; the most affected was MobileNetV2, which dropped from 98.61% to 67.73%, while VGG16 proved to be the least affected. Our findings show that DL models for COVID-19 are vulnerable to adversarial examples: FGSM was capable of fooling the models, resulting in a significant reduction in DL performance.
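For reference, the FGSM perturbation described in the abstract can be sketched as below. This is a minimal illustration assuming PyTorch; the function name, epsilon value, and pixel range are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.01):
    """Craft adversarial images: x_adv = x + eps * sign(grad_x L(f(x), y))."""
    # eps is an assumed perturbation budget, not a value reported in the paper.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction of the sign of the loss gradient, then clamp to valid pixels.
    adv = images + eps * images.grad.sign()
    return torch.clamp(adv, 0.0, 1.0).detach()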
Pages: 7