Ethics of Adversarial Machine Learning and Data Poisoning

Cited by: 0
Authors
Laurynas Adomaitis
Rajvardhan Oak
Affiliations
[1] CEA-Saclay/Larsim, Department of Computer Science
[2] NordVPN S.A.
[3] Microsoft
[4] University of California Davis
Source
Digital Society | 2023, Vol. 2, No. 1
Keywords
AI ethics; Privacy; Obfuscation; Adversarial machine learning; Data poisoning
DOI
10.1007/s44206-023-00039-1
Abstract
This paper investigates the ethical implications of using adversarial machine learning for the purpose of obfuscation. We suggest that adversarial attacks can be justified by privacy considerations but that they can also cause collateral damage. To clarify the matter, we employ two use cases—facial recognition and medical machine learning—to evaluate the collateral damage counterarguments to privacy-induced adversarial attacks. We conclude that obfuscation by data poisoning can be justified in facial recognition but not in the medical case. We motivate our conclusion by employing psychological arguments about change, privacy considerations, and purpose limitations on machine learning applications.
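The abstract's central mechanism, poisoning a model's training data so that the model mislearns, can be made concrete with a minimal sketch. The label-flipping example below is not from the paper; the synthetic dataset, logistic-regression model, and `poison` helper are illustrative assumptions built on scikit-learn, chosen only to show how a small fraction of corrupted labels degrades a trained classifier.

```python
# Minimal label-flipping data-poisoning sketch (illustrative, not from the paper).
# Assumes scikit-learn; the dataset and model are arbitrary stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def poison(labels, fraction):
    """Flip the binary labels of a random `fraction` of training points."""
    flipped = labels.copy()
    idx = rng.choice(len(labels), size=int(fraction * len(labels)), replace=False)
    flipped[idx] = 1 - flipped[idx]  # binary labels: 0 <-> 1
    return flipped

for frac in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, poison(y_tr, frac))
    print(f"poisoned fraction {frac:.1f}: test accuracy {clf.score(X_te, y_te):.3f}")
```

The obfuscation tools the abstract considers for facial recognition typically perturb the images themselves rather than their labels, but the downstream effect, a model that performs worse for everyone trained on the data, is the same collateral-damage concern the paper weighs against privacy.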
Related Papers
50 in total
  • [31] Machine Learning Integrity and Privacy in Adversarial Environments
    Oprea, Alina
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 1 - 2
  • [32] Adversarial Machine Learning in Smart Energy Systems
    Bor, Martin C.
    Marnerides, Angelos K.
    Molineux, Andy
    Wattam, Steve
    Roedig, Utz
    E-ENERGY'19: PROCEEDINGS OF THE 10TH ACM INTERNATIONAL CONFERENCE ON FUTURE ENERGY SYSTEMS, 2019, : 413 - 415
  • [33] Adversarial Machine Learning: A Survey on the Influence Axis
    Alzahrani, Shahad
    Almalki, Taghreed
    Alsuwat, Hatim
    Alsuwat, Emad
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2022, 22 (05): : 193 - 203
  • [34] Data Poisoning Attack by Label Flipping on SplitFed Learning
    Gajbhiye, Saurabh
    Singh, Priyanka
    Gupta, Shaifu
    RECENT TRENDS IN IMAGE PROCESSING AND PATTERN RECOGNITION, RTIP2R 2022, 2023, 1704 : 391 - 405
  • [35] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [36] A Countermeasure Method Using Poisonous Data Against Poisoning Attacks on IoT Machine Learning
    Chiba, Tomoki
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    INTERNATIONAL JOURNAL OF SEMANTIC COMPUTING, 2021, 15 (02) : 215 - 240
  • [37] The Vulnerability of UAVs: An Adversarial Machine Learning Perspective
    Doyle, Michael
    Harguess, Joshua
    Manville, Keith
    Rodriguez, Mikel
    GEOSPATIAL INFORMATICS XI, 2021, 11733
  • [38] Can machine learning model with static features be fooled: an adversarial machine learning approach
    Taheri, Rahim
    Javidan, Reza
    Shojafar, Mohammad
    Vinod, P.
    Conti, Mauro
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2020, 23 (04): : 3233 - 3253
  • [40] Poisoning attacks on machine learning models in cyber systems and mitigation strategies
    Izmailov, Rauf
    Venkatesan, Sridhar
    Reddy, Achyut
    Chadha, Ritu
    De Lucia, Michael
    Oprea, Alina
    DISRUPTIVE TECHNOLOGIES IN INFORMATION SCIENCES VI, 2022, 12117