Ethics of Adversarial Machine Learning and Data Poisoning

Cited by: 0
Authors
Laurynas Adomaitis
Rajvardhan Oak
Affiliations
[1] CEA-Saclay/Larsim
[2] NordVPN S.A.
[3] Microsoft
[4] Department of Computer Science, University of California, Davis
Source
Digital Society | 2023, Volume 2, Issue 1
Keywords
AI ethics; Privacy; Obfuscation; Adversarial machine learning; Data poisoning
DOI
10.1007/s44206-023-00039-1
Abstract
This paper investigates the ethical implications of using adversarial machine learning for the purpose of obfuscation. We suggest that adversarial attacks can be justified by privacy considerations, but that they can also cause collateral damage. To clarify the matter, we employ two use cases, facial recognition and medical machine learning, to evaluate the collateral-damage counterarguments to privacy-motivated adversarial attacks. We conclude that obfuscation by data poisoning can be justified in the facial recognition case but not in the medical case. We motivate our conclusion with psychological arguments about change, privacy considerations, and purpose limitations on machine learning applications.