Transferability of Quantum Adversarial Machine Learning

Cited: 0
Authors
Li, Vincent [1 ,2 ]
Wooldridge, Tyler [1 ]
Wang, Xiaodi [1 ]
Affiliations
[1] Western Connecticut State Univ, 181 White St, Danbury, CT 06810 USA
[2] Horace Mann Sch, 231 W 246 St, Bronx, NY 10471 USA
Keywords
Quantum adversarial machine learning; Fast gradient sign method; Transfer attack; Quantum neural network; Classical neural network; Black box attack;
DOI
10.1007/978-981-19-1610-6_71
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
Quantum adversarial machine learning lies at the intersection of quantum computing and adversarial machine learning. As the attainment of quantum supremacy demonstrates, quantum computers have already outpaced classical computers in certain domains (Arute et al. in Nature 574:505-510, 2019 [3]). The study of quantum computation is becoming increasingly relevant in today's world. One field in which quantum computing may be applied is adversarial machine learning. A step toward better understanding quantum computing applied to adversarial machine learning was taken recently by Lu et al. (Phys Rev Res 2:1-18, 2020 [13]), who showed that gradient-based adversarial attacks can be transferred from classical to quantum neural networks. Inspired by Lu et al. (Phys Rev Res 2:1-18, 2020 [13]), we investigate whether adversarial examples transfer between different neural networks and the implications of that transferability. We find that, when the fast gradient sign attack, as described by Goodfellow et al. (Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 [9]), is applied to a quantum neural network, the adversarially perturbed images produced with that method transfer between quantum neural networks and from quantum to classical neural networks. In other words, adversarial images produced to deceive a quantum neural network can also deceive other quantum and classical neural networks. The results demonstrate that transferability of adversarial examples exists in quantum machine learning. This transferability suggests a similarity in the decision boundaries of the different models, which may be an important subject of future study in quantum machine learning theory.
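The fast gradient sign method referenced above perturbs an input in the direction of the sign of the loss gradient: x' = x + ε·sign(∇ₓ J(θ, x, y)). A minimal sketch of this idea, assuming a toy logistic-regression "network" rather than the paper's quantum neural networks (the model, weights, and ε below are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Return the FGSM example x + eps * sign(grad_x loss) for a
    logistic model with binary cross-entropy loss (toy stand-in for
    the neural networks studied in the paper)."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid output, P(label = 1)
    grad_x = (p - y) * w           # analytic d(BCE)/dx for this model
    return x + eps * np.sign(grad_x)

# Hypothetical input and weights, for illustration only.
x = np.array([0.2, -0.1, 0.4])
w = np.array([1.0, -2.0, 0.5])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, eps=0.1)
print(np.round(x_adv, 2))  # each coordinate shifted by exactly ±eps
```

Because sign() discards gradient magnitude, every coordinate moves by exactly ε, which is what keeps the perturbation small in the L∞ norm; a transfer attack then feeds x_adv, crafted against one model, to a different (possibly black-box) model.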
Pages: 805-814 (10 pages)