Quantum adversarial machine learning

Cited by: 75
Authors
Lu, Sirui [1 ,2 ]
Duan, Lu-Ming [1 ]
Deng, Dong-Ling [1 ,3 ]
Affiliations
[1] Tsinghua Univ, IIIS, Ctr Quantum Informat, Beijing 100084, Peoples R China
[2] Max Planck Inst Quantum Opt, Hans Kopfermann Str 1, D-85748 Garching, Germany
[3] Shanghai Qi Zhi Inst, 41st Floor, AI Tower, 701 Yunjin Rd, Shanghai 200232, Peoples R China
Source
PHYSICAL REVIEW RESEARCH | 2020, Vol. 2, Issue 03
Keywords
NEURAL-NETWORKS; PHASE-TRANSITIONS; GAME; GO;
DOI
10.1103/PhysRevResearch.2.033212
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
Adversarial machine learning is an emerging field that focuses on studying vulnerabilities of machine learning approaches in adversarial settings and developing techniques accordingly to make learning robust to adversarial manipulations. It plays a vital role in various machine learning applications and recently has attracted tremendous attention across different communities. In this paper, we explore different adversarial scenarios in the context of quantum machine learning. We find that, similar to traditional classifiers based on classical neural networks, quantum learning systems are likewise vulnerable to crafted adversarial examples, independent of whether the input data is classical or quantum. In particular, we find that a quantum classifier that achieves nearly the state-of-the-art accuracy can be conclusively deceived by adversarial examples obtained via adding imperceptible perturbations to the original legitimate samples. This is explicitly demonstrated with quantum adversarial learning in different scenarios, including classifying real-life images (e.g., handwritten digit images in the dataset MNIST), learning phases of matter (such as ferromagnetic/paramagnetic orders and symmetry protected topological phases), and classifying quantum data. Furthermore, we show that based on the information of the adversarial examples at hand, practical defense strategies can be designed to fight against a number of different attacks. Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which not only reveals another perspective in bridging machine learning and quantum physics in theory but also provides valuable guidance for practical applications of quantum classifiers based on both near-term and future quantum technologies.
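The abstract's central mechanism is crafting adversarial examples by adding imperceptible perturbations to legitimate inputs. As an illustration only (not the paper's quantum construction), here is a minimal sketch of the fast gradient sign method (FGSM), a standard way to craft such perturbations, applied to a toy classical logistic classifier; the function name and toy setup are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Craft an adversarial input x' = x + eps * sign(d loss / d x)
    for a logistic classifier p = sigmoid(w.x + b) with true label y."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy demo: perturb a point the classifier confidently labels as class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = 0.5 * w / np.linalg.norm(w)    # logit w.x = 0.5*||w|| > 0, so class 1
x_adv = fgsm_perturb(x, w, b=0.0, y=1, eps=0.1)
# Each component changes by at most eps, yet the logit is pushed toward class 0.
```

The same idea transfers to the quantum setting discussed in the abstract: the attacker follows the loss gradient with respect to the input (classical data or the parameters encoding a quantum state) while keeping the perturbation norm small enough to be imperceptible.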
Pages: 22
Related Papers
50 records
  • [41] Machine Learning Integrity and Privacy in Adversarial Environments
    Oprea, Alina
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 1 - 2
  • [42] Adversarial Machine Learning Against Digital Watermarking
    Quiring, Erwin
    Rieck, Konrad
    2018 26TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2018, : 519 - 523
  • [43] Safe Machine Learning and Defeating Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javidi, Tara
    Koushanfar, Farinaz
    IEEE SECURITY & PRIVACY, 2019, 17 (02) : 31 - 38
  • [44] Security Analytics in the Context of Adversarial Machine Learning
    Tygar, Doug
    IWSPA'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS, 2016, : 49 - 49
  • [45] Detection of adversarial attacks on machine learning systems
    Judah, Matthew
    Sierchio, Jen
    Planer, Michael
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [46] Defense strategies for Adversarial Machine Learning: A survey
    Bountakas, Panagiotis
    Zarras, Apostolis
    Lekidis, Alexios
    Xenakis, Christos
    COMPUTER SCIENCE REVIEW, 2023, 49
  • [47] Machine learning uncertainties with adversarial neural networks
    Christoph Englert
    Peter Galler
    Philip Harris
    Michael Spannowsky
    The European Physical Journal C, 2019, 79
  • [48] Adversarial Machine Learning: A Survey on the Influence Axis
    Alzahrani, Shahad
    Almalki, Taghreed
    Alsuwat, Hatim
    Alsuwat, Emad
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2022, 22 (05): 193 - 203
  • [49] A Metric for Machine Learning Vulnerability to Adversarial Examples
    Bradley, Matthew
    Xu, Shengjie
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM WKSHPS 2021), 2021,
  • [50] Ethics of Adversarial Machine Learning and Data Poisoning
    Laurynas Adomaitis
    Rajvardhan Oak
    Digital Society, 2023, 2 (1):