A General Approach to Dropout in Quantum Neural Networks

Cited by: 5
Authors
Scala, Francesco [1 ]
Ceschini, Andrea [2 ]
Panella, Massimo [2 ]
Gerace, Dario [1 ]
Affiliations
[1] Univ Pavia, Dipartimento Fis, Pavia, Italy
[2] Univ Roma La Sapienza, Dipartimento Ingn Informaz Elettron & Telecomunica, Rome, Italy
Keywords
dropout; overfitting; overparametrization; quantum neural networks;
DOI
10.1002/qute.202300220
CLC Classification: O4 [Physics]
Discipline Code: 0702
Abstract
In classical machine learning (ML), "overfitting" is the phenomenon that occurs when a given model learns the training data excessively well and thus performs poorly on unseen data. A commonly employed technique in ML is the so-called "dropout," which prevents computational units from becoming too specialized, hence reducing the risk of overfitting. With the advent of quantum neural networks (QNNs) as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits as well as the multiple embeddings of classical features that are employed to provide computational nonlinearity. Here, a generalized approach is presented to apply the dropout technique in QNN models, defining and analyzing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization. This study allows one to envision the power of quantum dropout in enabling generalization, providing useful guidelines for determining the maximal dropout probability for a given model based on overparametrization theory. It also highlights how quantum dropout does not impact the features of the QNN models, such as expressibility and entanglement. All these conclusions are supported by extensive numerical simulations and may pave the way to efficiently employing deep quantum machine learning (QML) models based on state-of-the-art QNNs.

Randomly dropping artificial neurons and all their connections during the training phase reduces overfitting issues in classical neural networks, thus improving performance on previously unseen data. The authors introduce different dropout strategies applied to quantum neural networks, learning models based on parametrized quantum circuits. Quantum dropout strategies might help reduce overfitting without impacting the expressibility and entanglement of these models.
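The core idea described in the abstract can be illustrated with a minimal sketch: at each training step, every parametrized gate in the circuit is dropped (i.e., replaced by the identity) with some probability, analogously to how classical dropout deactivates neurons. The single-qubit circuit, the `circuit_state` helper, and the alternating RY/RZ layout below are illustrative assumptions for this sketch, not the authors' actual ansatz or dropout schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    # Single-qubit Y rotation as a 2x2 unitary
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(theta):
    # Single-qubit Z rotation as a 2x2 unitary
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def circuit_state(params, keep_mask):
    """Apply a chain of alternating RY/RZ rotations on one qubit,
    skipping the gates marked as dropped for this training step."""
    state = np.array([1.0, 0.0], dtype=complex)  # |0>
    for i, theta in enumerate(params):
        if not keep_mask[i]:
            continue  # dropped gate acts as the identity this step
        gate = ry(theta) if i % 2 == 0 else rz(theta)
        state = gate @ state
    return state

p_drop = 0.3                                        # per-gate dropout probability
params = rng.uniform(0, 2 * np.pi, size=6)          # trainable rotation angles
keep_mask = rng.random(len(params)) >= p_drop       # resampled at every step
state = circuit_state(params, keep_mask)
print(abs(np.vdot(state, state)))  # norm is preserved (unitary gates)
```

Because each kept gate is unitary and a dropped gate is simply the identity, the circuit remains a valid quantum circuit for any realized mask; only the effective depth fluctuates between training steps, which is what discourages any single gate from becoming overly specialized.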
Pages: 18