A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems

Cited by: 12
Authors
Wang, Zizhou [1 ]
Shu, Xin [1 ]
Wang, Yan [2 ]
Feng, Yangqin [2 ]
Zhang, Lei [1 ]
Yi, Zhang [1 ]
Affiliations
[1] Sichuan University, College of Computer Science, Chengdu 610065, People's Republic of China
[2] Agency for Science, Technology and Research, Institute of High Performance Computing, Singapore 138632, Singapore
Funding
National Natural Science Foundation of China
Keywords
Adversarial attack; deep learning; healthcare security; medical image analysis; robustness
DOI
10.1109/TCYB.2022.3209175
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification
0812
Abstract
Deep neural networks have shown powerful performance in medical image analysis across a variety of diseases. However, a number of studies over the past few years have demonstrated that these deep learning systems can be vulnerable to well-designed adversarial attacks, in which minor perturbations are added to the input. As both the public and academia increasingly rely on deep learning in the health information economy, such adversarial attacks grow in importance and raise serious security concerns. In this article, adversarial attacks on deep learning systems in medicine are analyzed from two points of view: 1) white box and 2) black box. A fast adversarial sample generation method, the Feature Space-Restricted Attention Attack, is proposed to explore more confusing adversarial samples. It is based on a generative adversarial network with a bounded classification space that generates the attack perturbations, and it employs an attention mechanism to focus the perturbation on the lesion region. Tying the perturbation closely to the classification-relevant information makes the attack both more efficient and less visible. The performance and specificity of the proposed attack are demonstrated through extensive experiments on three different types of medical images. Finally, it is hoped that this work helps practitioners become aware of current weaknesses in the deployment of deep learning systems in clinical settings, and that it motivates further investigation of domain-specific features of medical deep learning systems to enhance model generalization and resistance to attacks.
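To make the attack idea in the abstract concrete, below is a minimal sketch of an attention-masked, generator-based perturbation step, assuming a PyTorch-style setup. This is not the authors' implementation: the architecture, the GAN discriminator, the exact feature-space restriction, and every name and hyperparameter here (PerturbationGenerator, attack_step, eps, lesion_mask) are illustrative assumptions.

    # Hypothetical sketch, NOT the paper's actual code: a generator produces a
    # bounded perturbation, an attention (lesion) mask restricts where it acts,
    # and the generator is trained to fool a fixed classifier.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PerturbationGenerator(nn.Module):
        # Toy generator: maps an image batch to a bounded perturbation map.
        def __init__(self, channels=3, eps=8 / 255):
            super().__init__()
            self.eps = eps  # L-infinity budget keeping the perturbation subtle
            self.net = nn.Sequential(
                nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=3, padding=1), nn.Tanh(),
            )

        def forward(self, x):
            # tanh output lies in [-1, 1]; scale it into the [-eps, eps] budget
            return self.eps * self.net(x)

    def attack_step(generator, classifier, optimizer, x, y, lesion_mask):
        # One untargeted step: confine the perturbation to the lesion region
        # via the attention mask and update the generator to push the (frozen)
        # classifier away from the true label y.
        optimizer.zero_grad()
        delta = generator(x) * lesion_mask           # attention restricts support
        x_adv = torch.clamp(x + delta, 0.0, 1.0)     # keep a valid image range
        loss = -F.cross_entropy(classifier(x_adv), y)  # maximize classifier loss
        loss.backward()
        optimizer.step()
        return x_adv.detach()

In the full method, a discriminator loss and the feature-space restriction would additionally constrain the perturbation; lesion_mask above stands in for whatever attention map the method derives (for instance, from lesion segmentation or saliency), which is an assumption on this sketch's part.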
Pages: 5323-5335
Number of pages: 13