Training deep quantum neural networks

Cited by: 373
Authors
Beer, Kerstin [1]
Bondarenko, Dmytro [1]
Farrelly, Terry [1,2]
Osborne, Tobias J. [1]
Salzmann, Robert [1,3]
Scheiermann, Daniel [1]
Wolf, Ramona [1]
Affiliations
[1] Leibniz Univ Hannover, Inst Theoret Phys, Appelstr 2, D-30167 Hannover, Germany
[2] Univ Queensland, Sch Math & Phys, ARC Ctr Engn Quantum Syst, Brisbane, Qld 4072, Australia
[3] Univ Cambridge, Dept Appl Math & Theoret Phys, Cambridge CB3 0WA, England
Funding
Australian Research Council
Keywords
PERCEPTRON;
DOI
10.1038/s41467-020-14454-2
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
Neural networks enjoy widespread success in both research and industry and, with the advent of quantum technology, it is a crucial challenge to design quantum neural networks for fully quantum learning tasks. Here we propose a truly quantum analogue of classical neurons, which form quantum feedforward neural networks capable of universal quantum computation. We describe the efficient training of these networks using the fidelity as a cost function, providing both classical and efficient quantum implementations. Our method allows for fast optimisation with reduced memory requirements: the number of qudits required scales with only the width, allowing deep-network optimisation. We benchmark our proposal for the quantum task of learning an unknown unitary and find remarkable generalisation behaviour and a striking robustness to noisy training data.
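The abstract's core idea of training with the fidelity as a cost function can be illustrated classically. The toy sketch below (not the paper's quantum implementation; all names such as `unitary` and `cost`, the single-parameter rotation ansatz, and the finite-difference optimiser are illustrative assumptions) trains a parameterised unitary to reproduce an unknown target unitary by maximising the average fidelity over input/output training pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def unitary(theta):
    # Single-parameter Y-rotation: a toy stand-in for the trainable network.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Unknown target unitary and training pairs (input state, target output state).
target_theta = 1.2
target = unitary(target_theta)
inputs = [v / np.linalg.norm(v) for v in rng.normal(size=(10, 2))]
pairs = [(x, target @ x) for x in inputs]

def cost(theta):
    # Average fidelity |<phi_target | U(theta) | psi_in>|^2 over the training set.
    U = unitary(theta)
    return np.mean([abs(np.vdot(y, U @ x)) ** 2 for x, y in pairs])

# Gradient ascent on the fidelity via a central finite difference.
theta, lr, eps = 0.0, 0.5, 1e-5
for _ in range(200):
    grad = (cost(theta + eps) - cost(theta - eps)) / (2 * eps)
    theta += lr * grad

print(round(cost(theta), 4))  # average fidelity approaches 1 after training
```

In the paper this optimisation is carried out layer by layer over the network's perceptron unitaries, with both a classical simulation and an efficient quantum implementation; the sketch only conveys the cost-function idea.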
Pages: 6
Related Papers
50 records in total
  • [31] A fast adaptive algorithm for training deep neural networks
    Gui, Yangting
    Li, Dequan
    Fang, Runyue
    APPLIED INTELLIGENCE, 2023, 53 (04) : 4099 - 4108
  • [32] Noisy training for deep neural networks in speech recognition
    Shi Yin
    Chao Liu
    Zhiyong Zhang
    Yiye Lin
    Dong Wang
    Javier Tejedor
    Thomas Fang Zheng
    Yinguo Li
    EURASIP Journal on Audio, Speech, and Music Processing, 2015
  • [33] Efficient Incremental Training for Deep Convolutional Neural Networks
    Tao, Yudong
    Tu, Yuexuan
    Shyu, Mei-Ling
    2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019, : 286 - 291
  • [34] An Efficient Optimization Technique for Training Deep Neural Networks
    Mehmood, Faisal
    Ahmad, Shabir
    Whangbo, Taeg Keun
    MATHEMATICS, 2023, 11 (06)
  • [35] Fluorescence microscopy datasets for training deep neural networks
    Hagen, Guy M.
    Bendesky, Justin
    Machado, Rosa
    Tram-Anh Nguyen
    Kumar, Tanmay
    Ventura, Jonathan
    GIGASCIENCE, 2021, 10 (05):
  • [36] A survey on parallel training algorithms for deep neural networks
    Yook, Dongsuk
    Lee, Hyowon
    Yoo, In-Chul
    JOURNAL OF THE ACOUSTICAL SOCIETY OF KOREA, 2020, 39 (06): : 505 - 514
  • [37] A simple theory for training response of deep neural networks
    Nakazato, Kenichi
    PHYSICA SCRIPTA, 2024, 99 (06)
  • [38] Training Deep Neural Networks in Situ with Neuromorphic Photonics
    Filipovich, Matthew J.
    Guo, Zhimu
    Marquez, Bicky A.
    Morison, Hugh D.
    Shastri, Bhavin J.
    2020 IEEE PHOTONICS CONFERENCE (IPC), 2020,
  • [39] Stability for the training of deep neural networks and other classifiers
    Berlyand, Leonid
    Jabin, Pierre-Emmanuel
    Safsten, C. Alex
    MATHEMATICAL MODELS & METHODS IN APPLIED SCIENCES, 2021, 31 (11): : 2345 - 2390
  • [40] An Exploration on Temperature Term in Training Deep Neural Networks
    Si, Zhaofeng
    Qi, Honggang
    2019 16TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), 2019,