Adversarial attacks and adversarial training for burn image segmentation based on deep learning

Cited by: 1
Authors
Chen, Luying [1 ]
Liang, Jiakai [1 ]
Wang, Chao [1 ]
Yue, Keqiang [1 ]
Li, Wenjun [1 ]
Fu, Zhihui [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardware, Hangzhou 317300, Peoples R China
[2] Zhejiang Univ, Affiliated Hosp 2, Sch Med, Hangzhou 310009, Peoples R China
Keywords
Deep learning; Burn images; Adversarial attack; Adversarial training; Image segmentation; Classification; Diseases; Depth
DOI
10.1007/s11517-024-03098-9
CLC Number
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Deep learning has been widely applied to image classification and segmentation, but adversarial attacks can degrade a model's results on both tasks. Medical images are especially vulnerable: constraints such as shooting angle, ambient lighting, and heterogeneous imaging devices mean they typically contain various forms of noise. To address the impact of such physically meaningful disturbances on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for burn image segmentation. The method is evaluated on our burn dataset. After defensive training with our approach, the segmentation accuracy on adversarial samples rises from 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual losses, and we assess and compare training results on different adversarial samples using various metrics.
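This record only summarizes the method, so the following is an illustrative sketch of the general technique the abstract names, not the authors' implementation: a generic PGD-style adversarial training step for a segmentation network in PyTorch, preceded by a crude "natural noise" perturbation standing in for the physically meaningful disturbances the abstract mentions. All function names, hyperparameters (eps, alpha, steps, clean_weight), and the loss mixing are hypothetical; the paper's actual attacks and losses are described at the DOI above.

import torch
import torch.nn.functional as F

def natural_noise(images, sigma=0.03, brightness=0.1):
    # Crude stand-in for physically meaningful noise: additive sensor noise
    # plus a per-image global lighting shift. The paper's actual
    # natural-phenomenon attacks are not specified in this record.
    gain = 1 + brightness * (2 * torch.rand(images.size(0), 1, 1, 1,
                                            device=images.device) - 1)
    return (gain * images + sigma * torch.randn_like(images)).clamp(0, 1)

def pgd_attack(model, images, masks, eps=8/255, alpha=2/255, steps=5):
    # Standard PGD on the segmentation cross-entropy; eps/alpha/steps are
    # assumed settings, not values from the paper.
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), masks)  # masks: (N, H, W) class ids
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0, 1).detach()
    return adv

def adversarial_training_step(model, optimizer, images, masks, clean_weight=0.5):
    # One training step mixing clean and adversarial segmentation losses;
    # the 50/50 mix is an assumption, not the authors' loss design.
    model.eval()   # keep batch-norm statistics fixed while crafting the attack
    adv_images = pgd_attack(model, natural_noise(images), masks)
    model.train()
    optimizer.zero_grad()
    loss = (clean_weight * F.cross_entropy(model(images), masks)
            + (1 - clean_weight) * F.cross_entropy(model(adv_images), masks))
    loss.backward()
    optimizer.step()
    return loss.item()

In use, one would loop adversarial_training_step over a burn-image dataloader. The abstract's reported gains (54% to 82.19% adversarial accuracy, with reduced training time) come from the authors' tailored attacks and losses, which this generic sketch does not reproduce.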
Pages: 2717-2735
Number of pages: 19