Adversarial robustness enhancement in deep learning-based breast cancer classification: A multi-faceted approach to poisoning and evasion attack mitigation

Cited by: 1
Authors
Doss, P. Lourdu Mahimai [1 ]
Gunasekaran, Muthumanickam [1 ]
Kim, Jungeun [2 ]
Kadry, Seifedine [3 ,4 ]
Affiliations
[1] Saveetha Univ, Saveetha Inst Med & Tech Sci, Saveetha Sch Engn, Dept Comp Sci & Engn, Chennai, India
[2] Inha Univ, Dept Comp Engn, Incheon 22212, South Korea
[3] Noroff Univ Coll, Dept Appl Data Sci, Kristiansand, Norway
[4] Lebanese Amer Univ, Dept Elect & Comp Engn, Byblos, Lebanon
Funding
National Research Foundation of Singapore
Keywords
Adversarial robustness; Stochastic Gradient Descent with Momentum; Poisoning attack; Evasion attack; Feature-space poison injection; Dynamic Layer-wise Weighting; Adaptive Denoising Layers;
DOI
10.1016/j.aej.2024.11.089
Chinese Library Classification
T [Industrial Technology]
Subject Classification Code
08
Abstract
Deep learning models for medical image classification remain vulnerable to adversarial attacks, particularly in the classification of Invasive Ductal Carcinoma (IDC); such attacks undermine the integrity and reliability of the model. This work optimizes a Convolutional Neural Network (CNN) for IDC classification: trained on the IDC dataset with Stochastic Gradient Descent with Momentum (SGD) as the optimizer, the model achieved 99% training accuracy and 80% testing accuracy. The paper then evaluates the model's susceptibility to adversarial manipulation, specifically poisoning and evasion attacks. Poisoning attacks under the Layer-wise Model Distortion (LMD) framework with feature-space poison injection reduced the model's accuracy to 66%, while evasion attacks using the Fast Gradient Sign Method (FGSM) under the same framework resulted in an accuracy of 92%. To close these gaps, new defense techniques are proposed and tested under the Layer-wise Robustness Enhancement (LRF) framework: dynamic layer-wise weighting raised accuracy against poisoning attacks to 76%, and adaptive denoising layers raised accuracy against evasion attacks to 79%. The study thus addresses the critical issue of adversarial manipulation in medical image classification and shows how the LRF defenses substantially improve the model's resilience, integrity, and trustworthiness.
Pages: 65-82
Page count: 18