Adversarial attacks and adversarial training for burn image segmentation based on deep learning

Cited by: 1
Authors
Chen, Luying [1 ]
Liang, Jiakai [1 ]
Wang, Chao [1 ]
Yue, Keqiang [1 ]
Li, Wenjun [1 ]
Fu, Zhihui [2 ]
Affiliations
[1] Hangzhou Dianzi Univ, Zhejiang Integrated Circuits & Intelligent Hardwar, Hangzhou 317300, Peoples R China
[2] Zhejiang Univ, Affiliated Hosp 2, Sch Med, Hangzhou 310009, Peoples R China
Keywords
Deep learning; Burn images; Adversarial attack; Adversarial training; Image segmentation; CLASSIFICATION; DISEASES; DEPTH;
DOI
10.1007/s11517-024-03098-9
CLC classification
TP39 [Computer applications];
Subject classification
081203 ; 0835 ;
Abstract
Deep learning has been widely applied to image classification and segmentation, yet adversarial attacks can degrade a model's classification and segmentation results. Medical images are especially affected: constraints such as shooting angle, environmental lighting, and heterogeneous imaging devices mean they typically contain various forms of noise. To address the impact of these physically meaningful disturbances on existing deep learning models for burn image segmentation, we simulate attack methods inspired by natural phenomena and propose an adversarial training approach designed specifically for burn image segmentation. The method is evaluated on our burn dataset. After defensive training with our approach, the segmentation accuracy on adversarial samples rises from an initial 54% to 82.19%, a 1.97% improvement over conventional adversarial training methods, while substantially reducing training time. Ablation experiments validate the effectiveness of the individual losses, and we assess and compare training results with different adversarial samples using various metrics.
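The gradient-based perturbations the abstract refers to can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), the canonical adversarial attack. The toy per-pixel logistic "segmentation" model, its weights, and all names below are hypothetical illustrations, not the paper's actual attack or model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(pred, target, tiny=1e-9):
    # Binary cross-entropy averaged over all pixels.
    return -np.mean(target * np.log(pred + tiny)
                    + (1 - target) * np.log(1 - pred + tiny))

def fgsm_attack(x, target, w, b, eps=0.1):
    """FGSM: x_adv = clip(x + eps * sign(dL/dx)).

    For a logistic unit pred = sigmoid(w*x + b) with mean BCE loss,
    the gradient w.r.t. the input simplifies to (pred - target) * w / N.
    """
    pred = sigmoid(w * x + b)
    grad_x = (pred - target) * w / x.size
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((8, 8))                 # toy "image" with values in [0, 1]
target = (x > 0.5).astype(float)       # toy ground-truth binary mask
w, b = 10.0, -5.0                      # fixed weights thresholding near 0.5

clean_loss = bce_loss(sigmoid(w * x + b), target)
x_adv = fgsm_attack(x, target, w, b, eps=0.1)
adv_loss = bce_loss(sigmoid(w * x_adv + b), target)
print(f"clean loss {clean_loss:.4f}  adversarial loss {adv_loss:.4f}")
```

Each pixel is nudged toward the decision boundary, so the loss on the perturbed input rises even though the perturbation is visually small. Adversarial training in the FGSM setting then amounts to generating such `x_adv` each step and including them in the training batch.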
Pages: 2717-2735
Number of pages: 19
Related papers
50 records total
[21]   Design of robust hyperspectral image classifier based on adversarial training against adversarial attack [J].
Park, I. ;
Kim, S. .
Journal of Institute of Control, Robotics and Systems, 2021, 27 (06) :389-400
[22]   On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification [J].
Park, Sanglee ;
So, Jungmin .
APPLIED SCIENCES-BASEL, 2020, 10 (22) :1-16
[23]   Adversarial Training Against Adversarial Attacks for Machine Learning-Based Intrusion Detection Systems [J].
Haroon, Muhammad Shahzad ;
Ali, Husnain Mansoor .
CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73 (02) :3513-3527
[24]   Exploring the feasibility of adversarial attacks on medical image segmentation [J].
Shukla, Sneha ;
Gupta, Anup Kumar ;
Gupta, Puneet .
MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (04) :11745-11768
[25]   Adversarial Attacks for Image Segmentation on Multiple Lightweight Models [J].
Kang, Xu ;
Song, Bin ;
Du, Xiaojiang ;
Guizani, Mohsen .
IEEE ACCESS, 2020, 8 :31359-31370
[26]   Evaluating Adversarial Attacks and Defences in Infrared Deep Learning Monitoring Systems [J].
Spasiano, Flaminia ;
Gennaro, Gabriele ;
Scardapane, Simone .
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
[28]   Adversarial Attacks in a Deep Reinforcement Learning based Cluster Scheduler [J].
Zhang, Shaojun ;
Wang, Chen ;
Zomaya, Albert Y. .
2020 IEEE 28TH INTERNATIONAL SYMPOSIUM ON MODELING, ANALYSIS, AND SIMULATION OF COMPUTER AND TELECOMMUNICATION SYSTEMS (MASCOTS 2020), 2020, :1-8
[29]   XSS adversarial example attacks based on deep reinforcement learning [J].
Chen, Li ;
Tang, Cong ;
He, Junjiang ;
Zhao, Hui ;
Lan, Xiaolong ;
Li, Tao .
COMPUTERS & SECURITY, 2022, 120
[30]   ADVERSARIAL ATTACKS ON RADAR TARGET RECOGNITION BASED ON DEEP LEARNING [J].
Zhou, Jie ;
Peng, Bo ;
Peng, Bowen .
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022, :2646-2649