Ethics of Adversarial Machine Learning and Data Poisoning

Cited by: 0
Authors
Laurynas Adomaitis
Rajvardhan Oak
Affiliations
[1] CEA-Saclay/Larsim, Department of Computer Science
[2] NordVPN S.A.
[3] Microsoft
[4] University of California Davis
Source
Digital Society | 2023, Volume 2, Issue 1
Keywords
AI ethics; Privacy; Obfuscation; Adversarial machine learning; Data poisoning
DOI
10.1007/s44206-023-00039-1
Abstract
This paper investigates the ethical implications of using adversarial machine learning for the purpose of obfuscation. We suggest that adversarial attacks can be justified by privacy considerations but that they can also cause collateral damage. To clarify the matter, we employ two use cases—facial recognition and medical machine learning—to evaluate the collateral damage counterarguments to privacy-induced adversarial attacks. We conclude that obfuscation by data poisoning can be justified in facial recognition but not in the medical case. We motivate our conclusion by employing psychological arguments about change, privacy considerations, and purpose limitations on machine learning applications.
Related papers (50 in total)
  • [41] Adversarial poisoning attacks on reinforcement learning-driven energy pricing
    Gunn, Sam
    Jang, Doseok
    Paradise, Orr
    Spangher, Lucas
    Spanos, Costas J.
PROCEEDINGS OF THE 9TH ACM INTERNATIONAL CONFERENCE ON SYSTEMS FOR ENERGY-EFFICIENT BUILDINGS, CITIES, AND TRANSPORTATION, BUILDSYS 2022, 2022, : 262 - 265
  • [42] Adversarial Deep Learning for Over-the-Air Spectrum Poisoning Attacks
    Sagduyu, Yalin E.
    Shi, Yi
    Erpek, Tugba
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2021, 20 (02) : 306 - 319
  • [43] Avoiding Occupancy Detection From Smart Meter Using Adversarial Machine Learning
    Yilmaz, Ibrahim
    Siraj, Ambareen
    IEEE ACCESS, 2021, 9 : 35411 - 35430
  • [44] Poisoning Attacks and Data Sanitization Mitigations for Machine Learning Models in Network Intrusion Detection Systems
    Venkatesan, Sridhar
    Sikka, Harshvardhan
    Izmailov, Rauf
    Chadha, Ritu
    Oprea, Alina
    de Lucia, Michael J.
2021 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2021), 2021
  • [45] Data Poisoning Attacks on Crowdsourcing Learning
    Chen, Pengpeng
    Sun, Hailong
    Chen, Zhijun
    WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858 : 164 - 179
  • [46] Beyond data poisoning in federated learning
    Kasyap, Harsh
    Tripathy, Somanath
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 235
  • [47] A Moving Target Defense against Adversarial Machine Learning
    Roy, Abhishek
    Chhabra, Anshuman
    Kamhoua, Charles A.
    Mohapatra, Prasant
    SEC'19: PROCEEDINGS OF THE 4TH ACM/IEEE SYMPOSIUM ON EDGE COMPUTING, 2019, : 383 - 388
  • [48] Data Poisoning Detection in Federated Learning
    Khuu, Denise-Phi
    Sober, Michael
    Kaaser, Dominik
    Fischer, Mathias
    Schulte, Stefan
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1549 - 1558
  • [49] Adversarial Machine Learning Attacks in Internet of Things Systems
    Kone, Rachida
    Toutsop, Otily
    Thierry, Ketchiozo Wandji
    Kornegay, Kevin
    Falaye, Joy
2022 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP, AIPR, 2022
  • [50] Quantum Adversarial Machine Learning: Status, Challenges and Perspectives
    Edwards, DeMarcus
    Rawat, Danda B.
    2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 128 - 133