Training deep quantum neural networks

Cited: 373
Authors
Beer, Kerstin [1 ]
Bondarenko, Dmytro [1 ]
Farrelly, Terry [1 ,2 ]
Osborne, Tobias J. [1 ]
Salzmann, Robert [1 ,3 ]
Scheiermann, Daniel [1 ]
Wolf, Ramona [1 ]
Affiliations
[1] Leibniz Univ Hannover, Inst Theoret Phys, Appelstr 2, D-30167 Hannover, Germany
[2] Univ Queensland, Sch Math & Phys, ARC Ctr Engn Quantum Syst, Brisbane, Qld 4072, Australia
[3] Univ Cambridge, Dept Appl Math & Theoret Phys, Cambridge CB3 0WA, England
Funding
Australian Research Council;
Keywords
PERCEPTRON;
DOI
10.1038/s41467-020-14454-2
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Neural networks enjoy widespread success in both research and industry and, with the advent of quantum technology, it is a crucial challenge to design quantum neural networks for fully quantum learning tasks. Here we propose a truly quantum analogue of classical neurons, which form quantum feedforward neural networks capable of universal quantum computation. We describe the efficient training of these networks using the fidelity as a cost function, providing both classical and efficient quantum implementations. Our method allows for fast optimisation with reduced memory requirements: the number of qudits required scales with only the width, allowing deep-network optimisation. We benchmark our proposal for the quantum task of learning an unknown unitary and find remarkable generalisation behaviour and a striking robustness to noisy training data.
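The abstract's central technique, using fidelity as the cost function when learning an unknown unitary, can be illustrated with a short simulation. The following is a minimal NumPy sketch, not the authors' dissipative quantum-network training algorithm: it learns a single-qubit unitary by gradient ascent on the pure-state fidelity cost C = (1/N) Σ_x |⟨φ_out^x|U(params)|φ_in^x⟩|², and the u3 parameterisation, learning rate, and helper names are illustrative assumptions.

```python
# Minimal sketch (assumption: NOT the paper's dissipative network algorithm):
# learn an unknown single-qubit unitary V by maximising the average fidelity
#   C = (1/N) * sum_x |<phi_out_x| U(params) |phi_in_x>|^2,
# the pure-state special case of the fidelity cost named in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def u3(theta, phi, lam):
    """Generic single-qubit unitary (standard U3 parameterisation)."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])

def random_state():
    """Random pure state: normalised complex Gaussian vector (Haar measure)."""
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    return psi / np.linalg.norm(psi)

# Unknown target unitary V and training pairs (|phi_in>, V|phi_in>).
V = u3(1.2, 0.7, -0.4)
train = []
for _ in range(10):
    s = random_state()
    train.append((s, V @ s))

def cost(params):
    """Average output fidelity of the candidate unitary over the training set."""
    U = u3(*params)
    return np.mean([abs(np.vdot(out, U @ inp)) ** 2 for inp, out in train])

# Gradient ascent on the fidelity cost via central finite differences.
params = rng.normal(size=3)
lr, eps = 0.5, 1e-6
for _ in range(300):
    grad = np.zeros(3)
    for i in range(3):
        step = np.zeros(3)
        step[i] = eps
        grad[i] = (cost(params + step) - cost(params - step)) / (2 * eps)
    params += lr * grad

print(f"final fidelity cost: {cost(params):.6f}")  # should approach 1
```

Because every training pair is generated by the same target V, the cost is maximised at 1 (up to a global phase), mirroring the learning-an-unknown-unitary benchmark the abstract describes.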
Pages: 6
Related Papers
50 records
  • [1] Training deep quantum neural networks
    Beer, Kerstin
    Bondarenko, Dmytro
    Farrelly, Terry
    Osborne, Tobias J.
    Salzmann, Robert
    Scheiermann, Daniel
    Wolf, Ramona
    Nature Communications, 2020, 11
  • [2] Quantum optimization for training quantum neural networks
    Liao, Yidong
    Hsieh, Min-Hsiu
    Ferrie, Chris
    QUANTUM MACHINE INTELLIGENCE, 2024, 6 (01)
  • [3] MULTILINGUAL TRAINING OF DEEP NEURAL NETWORKS
    Ghoshal, Arnab
    Swietojanski, Pawel
    Renals, Steve
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013, : 7319 - 7323
  • [4] NOISY TRAINING FOR DEEP NEURAL NETWORKS
    Meng, Xiangtao
    Liu, Chao
    Zhang, Zhiyong
    Wang, Dong
    2014 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (CHINASIP), 2014, : 16 - 20
  • [5] Quantum Contextuality for Training Neural Networks
    Zhang, Junwei
    Li, Zhao
    Chinese Journal of Electronics, 2020, 29 (06) : 1178 - 1184
  • [6] Is normalization indispensable for training deep neural networks?
    Shao, Jie
    Hu, Kai
    Wang, Changhu
    Xue, Xiangyang
    Raj, Bhiksha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [7] On Calibration of Mixup Training for Deep Neural Networks
    Maronas, Juan
    Ramos, Daniel
    Paredes, Roberto
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2020, 2021, 12644 : 67 - 76
  • [8] Exploiting Invariance in Training Deep Neural Networks
    Ye, Chengxi
    Zhou, Xiong
    McKinney, Tristan
    Liu, Yanfeng
    Zhou, Qinggang
    Zhdanov, Fedor
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 8849 - 8856
  • [9] Exploring strategies for training deep neural networks
    Larochelle, Hugo
    Bengio, Yoshua
    Louradour, Jérôme
    Lamblin, Pascal
    Journal of Machine Learning Research, 2009, 10 : 1 - 40
  • [10] Training Deep Neural Networks with Gradual Deconvexification
    Lo, James Ting-Ho
    Gui, Yichuan
    Peng, Yun
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 1000 - 1007