The Applicability of Reinforcement Learning for the Automatic Generation of State Preparation Circuits

Cited by: 2
Authors
Gabor, Thomas [1 ]
Zorn, Maximilian [1 ]
Linnhoff-Popien, Claudia [1 ]
Affiliations
[1] Ludwig Maximilians Univ Munchen, Munich, Germany
Source
PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022 | 2022
Keywords
quantum computing; state preparation; circuit design; reinforcement learning; neural network; actor/critic; NEURAL-NETWORKS
DOI
10.1145/3520304.3534039
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
State preparation is currently the only means of providing input data to a quantum algorithm, but finding the shortest possible sequence of gates that prepares a given state is not trivial. We approach this problem using reinforcement learning (RL), first with an agent trained to prepare only a single fixed quantum state. Despite the overhead of training a whole network to produce just one data point, gradient-based backpropagation appears competitive with genetic algorithms in this scenario, so single-state preparation seems a worthwhile task. In a second case, we train a single network to prepare arbitrary quantum states with some degree of success, despite a complete lack of structure in the training data set. In both cases we find that training is greatly improved by using QR decomposition to automatically map the agent's outputs to unitary operators, which mitigates the sparse-reward problem that usually makes this task challenging.
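The QR-based mapping mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the agent emits a flat real-valued output vector, and the function name and signature are hypothetical. QR decomposition of an arbitrary complex matrix yields a unitary factor Q, so any network output can be projected onto a valid quantum gate.

```python
import numpy as np

def raw_output_to_unitary(raw, n):
    """Map a raw real vector of length 2*n*n (hypothetical agent output)
    to an n x n unitary via QR decomposition."""
    # Interpret the first half as real parts, the second half as imaginary parts.
    m = raw[: n * n].reshape(n, n) + 1j * raw[n * n :].reshape(n, n)
    q, r = np.linalg.qr(m)
    # QR is only unique up to phases; absorb the phases of R's diagonal
    # into Q so the mapping is well defined.
    d = np.diagonal(r)
    return q * (d / np.abs(d))

# Usage: turn a random 8-dimensional output into a single-qubit (2x2) unitary.
rng = np.random.default_rng(0)
u = raw_output_to_unitary(rng.standard_normal(8), 2)
```

Because Q is unitary by construction, every agent action corresponds to a valid gate, so the agent never wastes exploration on invalid operators.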
Pages: 2196-2204
Number of pages: 9