Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare

Cited by: 160
Authors
Mozaffari-Kermani, Mehran [1 ]
Sur-Kolay, Susmita [2 ]
Raghunathan, Anand [3 ]
Jha, Niraj K. [4 ]
Affiliations
[1] Rochester Inst Technol, Dept Elect & Microelect Engn, Rochester, NY 14623 USA
[2] Indian Stat Inst, Adv Comp & Microelect Unit, Kolkata 700108, India
[3] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[4] Princeton Univ, Dept Elect Engn, Princeton, NJ 08544 USA
Funding
U.S. National Science Foundation
Keywords
Healthcare; machine learning; poisoning attacks; security; classification; rules
DOI
10.1109/JBHI.2014.2344095
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. A hindered diagnosis may have life-threatening consequences, while a false diagnosis may not only cause patient distress but also prompt users to distrust the machine-learning algorithm and even abandon the entire system. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class) or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be mounted even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
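The abstract describes the attack at a high level: craft inputs that, when merged into the training set, steer the learned model toward targeted errors, even when the adversary sees only statistics of the training data. The paper's exact generation procedure is not reproduced here; the following is a minimal sketch of that general idea, assuming a synthetic two-class dataset, a logistic-regression victim model, and a 10% poisoning budget (all illustrative choices, not the authors' setup). Poison points are sampled from the class-0 feature statistics but labeled as class 1, increasing the likelihood that true class-0 inputs are classified into class 1.

```python
# Minimal sketch of a targeted, statistics-based poisoning attack.
# Illustrative stand-in, NOT the paper's algorithm-independent procedure;
# the dataset, victim model, and 10% poisoning budget are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a two-class healthcare dataset (e.g., diseased vs. healthy).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Clean baseline.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# Craft poison points from class-0 statistics only (mean and covariance of
# the class-0 training points), but assign the malicious label 1, nudging
# the decision boundary so class-0 inputs drift toward class 1.
X0 = X_train[y_train == 0]
mu, cov = X0.mean(axis=0), np.cov(X0, rowvar=False)
n_poison = int(0.10 * len(X_train))       # assumed 10% poisoning budget
X_poison = rng.multivariate_normal(mu, cov, size=n_poison)
y_poison = np.ones(n_poison, dtype=int)   # flipped (malicious) labels

poisoned_model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_poison]), np.concatenate([y_train, y_poison]))

# Targeted error: how often true class-0 samples now land in class 1.
mask = y_test == 0
print("poisoned accuracy:",
      accuracy_score(y_test, poisoned_model.predict(X_test)))
print("class 0 -> class 1 rate:",
      (poisoned_model.predict(X_test[mask]) == 1).mean())
```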
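The countermeasures in the abstract are based on tracking and detecting deviations in accuracy metrics. The sketch below is one hedged reading of that idea, not the paper's benchmarked procedure: before accepting a new batch of training data into an evolving dataset, retrain a candidate model and flag the batch if accuracy on a trusted held-out set drops by more than a threshold. Both `screen_update` and `max_drop` are hypothetical names introduced here for illustration.

```python
# Minimal sketch of a deviation-tracking defense: flag a candidate batch of
# new training data if held-out accuracy degrades beyond a threshold.
# `screen_update` and `max_drop` are illustrative, not the paper's method.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def screen_update(model, X_train, y_train, X_new, y_new,
                  X_val, y_val, max_drop=0.02):
    """Retrain on the augmented set and accept the new batch only if
    accuracy on a trusted validation set does not deviate downward by
    more than `max_drop` (hypothetical threshold)."""
    baseline = accuracy_score(y_val, model.predict(X_val))
    candidate = clone(model).fit(np.vstack([X_train, X_new]),
                                 np.concatenate([y_train, y_new]))
    updated = accuracy_score(y_val, candidate.predict(X_val))
    return (baseline - updated) <= max_drop, baseline, updated

# Usage, reusing the names from the attack sketch above: the poisoned batch
# should be rejected because validation accuracy deviates downward.
#   ok, base_acc, new_acc = screen_update(clean_model, X_train, y_train,
#                                         X_poison, y_poison, X_test, y_test)
```

This assumes access to a clean, trusted validation set; if the adversary can also poison the held-out data, the deviation signal itself becomes unreliable.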
Pages: 1893-1905 (13 pages)