QNAD: Quantum Noise Injection for Adversarial Defense in Deep Neural Networks

Cited: 0
Authors
Kundu, Shamik [1 ]
Choudhury, Navnil [1 ]
Das, Sanjay [1 ]
Raha, Arnab [2 ]
Basu, Kanad [1 ]
Affiliations
[1] Univ Texas Dallas, Richardson, TX 75083 USA
[2] Intel Corp, Santa Clara, CA USA
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON HARDWARE ORIENTED SECURITY AND TRUST, HOST, 2024
Keywords
Quantum Computing; Quantum Machine Learning; NISQ; Adversarial Attack; Deep Neural Networks;
DOI
10.1109/HOST55342.2024.10545406
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline classification code
0812;
Abstract
Deep learning in quantum computing seeks to leverage the unique properties of quantum systems, such as superposition and entanglement, to enhance the performance of deep learning algorithms. Quantum neural networks (QNNs), which are designed to operate on quantum computers, have the potential to enable faster and more efficient inference execution. However, quantum computers are susceptible to noise, which can rapidly degrade the coherence of quantum states and lead to errors in quantum computations. As a result, deep neural networks (DNNs) that operate on quantum computers may experience degraded classification accuracy during inference. In this paper, however, we demonstrate that this intrinsic quantum noise can actually improve the robustness of DNNs against adversarial input attacks: the noisy behavior of quantum computers can reduce the impact of adversarial attacks, thereby improving the accuracy of the degraded DNNs. To further enhance DNN robustness, we perform an extensive exploration of Quantum Noise injection for Adversarial Defense (QNAD), which induces carefully crafted crosstalk in the quantum computer. QNAD preselects a subset of pretrained network weights to be perturbed with injected crosstalk in the qubits, which become entangled through interactions between neighboring qubits. When evaluated on state-of-the-art network-dataset configurations, the proposed QNAD approach provides up to a 268% relative improvement in accuracy against adversarial input attacks compared to conventional DNN implementations.
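The defense intuition in the abstract — randomizing a preselected subset of weights so that an attack crafted against the nominal model partially misses — can be illustrated with a minimal classical sketch. This is only an analogy, not the paper's method: zero-mean Gaussian noise stands in for crosstalk-induced quantum noise, the linear model, values, and perturbed indices are all hypothetical, and the attack is a simple FGSM-style input perturbation (Goodfellow et al., ref. [9]).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" linear classifier: predict sign(w . x); true label is +1.
w = np.array([1.0, -0.5, 0.75])
x = np.array([0.4, -0.3, 0.6])

# FGSM-style input attack: step against the sign of the gradient of the
# score w . x with respect to x, which for a linear model is simply sign(w).
eps = 0.45
x_adv = x - eps * np.sign(w)

clean_score = float(w @ x)     # 1.0     -> correctly classified
adv_score = float(w @ x_adv)   # -0.0125 -> attack flips the decision

# Randomized defense: before each inference, perturb a preselected subset of
# weights (indices chosen arbitrarily here) with zero-mean Gaussian noise,
# a classical stand-in for crosstalk noise on the corresponding qubits.
idx = np.array([0, 2])
trials = 1000
correct = 0
for _ in range(trials):
    w_noisy = w.copy()
    w_noisy[idx] += rng.normal(0.0, 0.3, size=idx.size)
    correct += (w_noisy @ x_adv) > 0

recovered = correct / trials
print("deterministic accuracy under attack: 0.0")
print(f"noisy-weight accuracy under attack:  {recovered:.2f}")
```

Because the attack only barely crosses the decision boundary, the symmetric weight noise pushes a substantial fraction of randomized evaluations (roughly 40% with these toy numbers) back to the correct class, while the deterministic model is fooled every time — the same qualitative effect the paper attributes to intrinsic quantum noise.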
Pages: 1-11
Page count: 11
References
25 in total
[1]   Quantum machine learning [J].
Biamonte, Jacob ;
Wittek, Peter ;
Pancotti, Nicola ;
Rebentrost, Patrick ;
Wiebe, Nathan ;
Lloyd, Seth .
NATURE, 2017, 549 (7671) :195-202
[2]  
Cheng Heng-Tze, 2016, Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, P7
[3]   Leveraging Noise and Aggressive Quantization of In-Memory Computing for Robust DNN Hardware Against Adversarial Input and Weight Attacks [J].
Cherupally, Sai Kiran ;
Rakin, Adnan Siraj ;
Yin, Shihui ;
Seok, Mingoo ;
Fan, Deliang ;
Seo, Jae-sun .
2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, :559-564
[4]   Introduction to quantum noise, measurement, and amplification [J].
Clerk, A. A. ;
Devoret, M. H. ;
Girvin, S. M. ;
Marquardt, Florian ;
Schoelkopf, R. J. .
REVIEWS OF MODERN PHYSICS, 2010, 82 (02) :1155-1208
[5]   Quantum convolutional neural networks [J].
Cong, Iris ;
Choi, Soonwon ;
Lukin, Mikhail D. .
NATURE PHYSICS, 2019, 15 (12) :1273+
[6]   Machine learning & artificial intelligence in the quantum domain: a review of recent progress [J].
Dunjko, Vedran ;
Briegel, Hans J. .
REPORTS ON PROGRESS IN PHYSICS, 2018, 81 (07)
[7]  
Gardiner C., 2004, Quantum Noise: A Handbook of Markovian and Non-Markovian Quantum Stochastic Methods with Applications to Quantum Optics
[8]   Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack [J].
He, Zhezhi ;
Rakin, Adnan Siraj ;
Fan, Deliang .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :588-597
[9]  
Goodfellow IJ, 2015, arXiv:1412.6572
[10]   Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [J].
Jeddi, Ahmadreza ;
Shafiee, Mohammad Javad ;
Karg, Michelle ;
Scharfenberger, Christian ;
Wong, Alexander .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :1238-1247