Quantum neural networks under depolarization noise: exploring white-box attacks and defenses

Times Cited: 0
Authors
Winderl, David [1 ]
Franco, Nicola [1 ]
Lorenz, Jeanette Miriam [1 ]
Affiliations
[1] Fraunhofer Institute for Cognitive Systems IKS, Hansastr. 32, D-80686 Munich, Germany
Keywords
Quantum machine learning; Quantum computing; Adversarial robustness
DOI
10.1007/s42484-024-00208-6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Leveraging the unique properties of quantum mechanics, quantum machine learning (QML) promises computational breakthroughs and enriched perspectives where traditional systems reach their limits. However, like classical machine learning, QML is not immune to adversarial attacks. Quantum adversarial machine learning has become instrumental in highlighting the weak points of QML models when faced with adversarially crafted feature vectors. This work examines the interplay between depolarization noise and adversarial robustness. While previous results suggested that depolarization noise enhances robustness against adversarial threats, our findings paint a different picture: in a multi-class classification scenario, adding depolarization noise no longer provides additional robustness. To consolidate these findings, we conducted experiments with a multi-class classifier adversarially trained on gate-based quantum simulators, further elucidating this unexpected behavior.
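To make the setting concrete, the following is a minimal sketch (not the authors' code) of a noisy, gate-based quantum classifier under a white-box attack: a small PennyLane circuit on the default.mixed simulator, depolarization noise injected after the variational layers, and a one-step FGSM-style perturbation of the input features. The circuit layout, noise probability, attack strength, and loss are illustrative assumptions, not the paper's experimental configuration.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
p_noise = 0.05  # assumed depolarization probability (illustrative)
dev = qml.device("default.mixed", wires=n_qubits)

@qml.qnode(dev)
def classifier(x, weights):
    # Encode the classical feature vector as rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Variational layers acting as the trainable model.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # Depolarization noise applied to every wire.
    for w in range(n_qubits):
        qml.DepolarizingChannel(p_noise, wires=w)
    # A single expectation value serves as the classification score.
    return qml.expval(qml.PauliZ(0))

def loss(x, y, weights):
    # Squared error between the score in [-1, 1] and the label y.
    return (classifier(x, weights) - y) ** 2

def fgsm_attack(x, y, weights, eps=0.1):
    # White-box, one-step attack: move the input along the sign of its gradient.
    grad_x = qml.grad(loss, argnum=0)(x, y, weights)
    return x + eps * np.sign(grad_x)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
x_clean = np.array([0.3, -0.7], requires_grad=True)
x_adv = fgsm_attack(x_clean, y=1.0, weights=weights)
print("clean loss:", loss(x_clean, 1.0, weights))
print("adversarial loss:", loss(x_adv, 1.0, weights))

Comparing the clean and adversarial losses for classifiers built with different values of p_noise is one way to probe, in this toy setting, whether depolarization noise changes the attack's effectiveness.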
Pages: 13
Related Papers
50 records in total; first 10 shown
  • [1] Beating White-Box Defenses with Black-Box Attacks
    Kumova, Vera
    Pilat, Martin
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [2] Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks
    Uchendu, Adaku
    Campoy, Daniel
    Menart, Christopher
    Hildenbrandt, Alexandra
    2021 IEEE FOURTH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE 2021), 2021, : 72 - 80
  • [3] Impact of White-Box Adversarial Attacks on Convolutional Neural Networks
    Podder, Rakesh
    Ghosh, Sudipto
    2024 International Conference on Emerging Trends in Networks and Computer Communications, ETNCC 2024 - Proceedings, 2024, : 41 - 49
  • [4] Black-Box Attacks on Graph Neural Networks via White-Box Methods With Performance Guarantees
    Yang, Jielong
    Ding, Rui
    Chen, Jianyu
    Zhong, Xionghu
    Zhao, Huarong
    Xie, Linbo
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (10) : 18193 - 18204
  • [5] IWA: Integrated gradient-based white-box attacks for fooling deep neural networks
    Wang, Yixiang
    Liu, Jiqiang
    Chang, Xiaolin
    Misic, Jelena
    Misic, Vojislav B.
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (07) : 4253 - 4276
  • [6] A Robustness-Assured White-Box Watermark in Neural Networks
    Lv, Peizhuo
    Li, Pan
    Zhang, Shengzhi
    Chen, Kai
    Liang, Ruigang
    Ma, Hualong
    Zhao, Yue
    Li, Yingjiu
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06) : 5214 - 5229
  • [7] Gradient Correction for White-Box Adversarial Attacks
    Liu, Hongying
    Ge, Zhijin
    Zhou, Zhenyu
    Shang, Fanhua
    Liu, Yuanyuan
    Jiao, Licheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 35 (12) : 1 - 12
  • [8] Two Attacks on a White-Box AES Implementation
    Lepoint, Tancrede
    Rivain, Matthieu
    De Mulder, Yoni
    Roelse, Peter
    Preneel, Bart
    SELECTED AREAS IN CRYPTOGRAPHY - SAC 2013, 2014, 8282 : 265 - 285
  • [9] A White-Box Testing for Deep Neural Networks Based on Neuron Coverage
    Yu, Jing
    Duan, Shukai
    Ye, Xiaojun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 9185 - 9197
  • [10] LPN-based Attacks in the White-box Setting
    Charlès, A.
    Udovenko, A.
    IACR Transactions on Cryptographic Hardware and Embedded Systems, 2023, 2023 (04) : 318 - 343