Frequency constraint-based adversarial attack on deep neural networks for medical image classification

Cited by: 6
Authors
Chen, Fang [1 ,2 ]
Wang, Jian [1 ,2 ]
Liu, Han [5 ]
Kong, Wentao [5 ]
Zhao, Zhe [3 ]
Ma, Longfei [4 ]
Liao, Hongen [4 ]
Zhang, Daoqiang [1 ,2 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Key Lab Brain Machine Intelligence Technol, Minist Educ, Nanjing, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing, Peoples R China
[3] Tsinghua Univ, Beijing Tsinghua Changgung Hosp, Dept Orthopaed, Beijing 102218, Peoples R China
[4] Tsinghua Univ, Sch Med, Dept Biomed Engn, Beijing, Peoples R China
[5] Nanjing Univ, Affiliated Drum Tower Hosp, Dept Ultrasound, Med Sch, Nanjing 21008, Peoples R China
Keywords
Adversarial attack; Frequency constraint; Medical diagnosis; Perturbation; Robustness
DOI
10.1016/j.compbiomed.2023.107248
Chinese Library Classification (CLC)
Q [Biological Sciences]
Discipline codes
07; 0710; 09
Abstract
The security of AI systems has gained significant attention in recent years, particularly in the field of medical diagnosis. To develop a secure medical image classification system based on deep neural networks, it is crucial to design effective adversarial attacks that can embed hidden, malicious behaviors into the system. However, the diversity of medical imaging modalities and dimensionalities makes it challenging to design a unified attack method that generates imperceptible attack samples with high content similarity and applies to diverse medical image classification systems. Most existing attack methods are designed to attack natural image classification models and inevitably corrupt the semantics of pixels by applying spatial perturbations. To address this issue, we propose a novel frequency constraint-based adversarial attack method capable of delivering attacks across various medical image classification tasks. Specifically, our method introduces a frequency constraint that injects perturbation into high-frequency information while preserving low-frequency information to ensure content similarity. We evaluate our method on four public medical image datasets with different imaging modalities and dimensionalities: a 3D CT dataset, a 2D chest X-ray dataset, a 2D breast ultrasound dataset, and a 2D thyroid ultrasound dataset. The results demonstrate the superior performance of our method over other state-of-the-art adversarial attack methods for attacking medical image classification tasks across these modalities and dimensionalities.
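The core idea stated in the abstract (inject perturbation into high-frequency content while keeping low-frequency content unchanged) can be sketched as an FFT-based projection. This is a minimal illustrative sketch, not the paper's actual implementation: the function name `frequency_constrained_perturb`, the use of a circular low-pass mask, and the `radius_frac` parameter are all hypothetical choices made here for demonstration.

```python
import numpy as np

def frequency_constrained_perturb(image, perturbation, radius_frac=0.1):
    """Project a spatial perturbation so it only alters high-frequency content.

    Low-frequency FFT coefficients (inside a centered circular mask) are taken
    from the clean image; high-frequency coefficients come from the perturbed
    image. The result is a perturbed image whose coarse content is preserved.
    """
    # Centered spectra of the clean and perturbed images
    f_img = np.fft.fftshift(np.fft.fft2(image))
    f_adv = np.fft.fftshift(np.fft.fft2(image + perturbation))

    # Circular low-frequency mask around the spectrum center
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_frac * min(h, w)) ** 2

    # Keep original low-frequency coefficients, adversarial high-frequency ones
    f_combined = np.where(low_mask, f_img, f_adv)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f_combined)))
```

In an iterative attack this projection would typically be applied after each gradient step, so the accumulated perturbation stays confined to high frequencies and the image's coarse anatomical content (its low-frequency band) remains untouched.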
Pages: 11