Adversarial attacks and adversarial training for burn image segmentation based on deep learning

Cited: 1
Authors
Chen, Luying [1]
Liang, Jiakai [1]
Wang, Chao [1]
Yue, Keqiang [1]
Li, Wenjun [1]
Fu, Zhihui [2]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardware, Hangzhou 317300, Peoples R China
[2] Zhejiang Univ, Affiliated Hosp 2, Sch Med, Hangzhou 310009, Peoples R China
Keywords
Deep learning; Burn images; Adversarial attack; Adversarial training; Image segmentation; CLASSIFICATION; DISEASES; DEPTH
DOI
10.1007/s11517-024-03098-9
CLC number
TP39 [Applications of computers]
Discipline codes
081203; 0835
Abstract
Deep learning has been widely applied to image classification and segmentation, yet adversarial attacks can alter the results of such models. Medical images are especially affected: owing to constraints such as shooting angle, environmental lighting, and diverse imaging devices, they typically contain various forms of noise. To address the impact of these physically meaningful disturbances on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for burn image segmentation. The method is evaluated on our burn dataset. After defensive training with our approach, the segmentation accuracy on adversarial samples rises from an initial 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of each individual loss, and we assess and compare the training results obtained with different adversarial samples using various metrics.
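This record gives no implementation details beyond the abstract, so the following is only an illustrative sketch of the kind of pipeline the abstract describes, not the authors' method. It shows, in PyTorch, a hypothetical lighting-style disturbance of the "physically meaningful" sort the abstract mentions, plus a standard PGD-based adversarial training step for a per-pixel segmentation model. The function names, the PGD stand-in attack, the gamma perturbation, and all hyperparameters are assumptions.

```python
# Illustrative sketch only -- NOT the authors' method. Assumes a PyTorch
# segmentation model mapping images (N, 3, H, W) in [0, 1] to per-pixel
# logits (N, C, H, W), and integer label masks (N, H, W). The PGD attack
# and the gamma "lighting" disturbance are generic stand-ins for the
# paper's nature-inspired attacks.
import torch
import torch.nn.functional as F

def lighting_disturbance(images, max_gamma=1.5):
    """Hypothetical physically meaningful noise: a random per-image gamma
    shift imitating uneven environmental lighting."""
    gamma = torch.empty(images.size(0), 1, 1, 1,
                        device=images.device).uniform_(1.0 / max_gamma, max_gamma)
    return images.clamp(0.0, 1.0).pow(gamma)

def pgd_attack(model, images, masks, eps=8 / 255, alpha=2 / 255, steps=5):
    """Generate adversarial images with projected gradient descent (PGD)."""
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), masks)
        grad, = torch.autograd.grad(loss, adv)
        # Ascend the per-pixel loss, then project back into the eps-ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

def adversarial_training_step(model, optimizer, images, masks, w_clean=0.5):
    """One optimisation step on a mix of clean and adversarial losses."""
    model.eval()                  # keep BatchNorm stats fixed while attacking
    adv = pgd_attack(model, lighting_disturbance(images), masks)
    model.train()
    optimizer.zero_grad()
    loss = (w_clean * F.cross_entropy(model(images), masks)
            + (1.0 - w_clean) * F.cross_entropy(model(adv), masks))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the actual work, the PGD stand-in would be replaced by the paper's nature-inspired attacks, and the two cross-entropy terms by the specific losses whose contributions its ablation study validates.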
Pages: 2717-2735
Number of pages: 19