Experimental Deep Reinforcement Learning for Error-Robust Gate-Set Design on a Superconducting Quantum Computer

Cited by: 67
Authors
Baum, Yuval [1 ,2 ]
Amico, Mirko [1 ,2 ]
Howell, Sean [1 ,2 ]
Hush, Michael [1 ,2 ]
Liuzzi, Maggie [1 ,2 ]
Mundada, Pranav [1 ,2 ]
Merkh, Thomas [1 ,2 ]
Carvalho, Andre R. R. [1 ,2 ]
Biercuk, Michael J. [1 ,2 ,3 ]
Affiliations
[1] Q CTRL, Sydney, NSW, Australia
[2] Q CTRL, Los Angeles, CA 90013 USA
[3] Univ Sydney, ARC Ctr Engn Quantum Syst, Sydney, NSW, Australia
Source
PRX QUANTUM | 2021, Vol. 2, Issue 4
Keywords
DECOHERENCE; ALGORITHM;
DOI
10.1103/PRXQuantum.2.040324
Chinese Library Classification
O4 [Physics];
Discipline Code
0702;
Abstract
Quantum computers promise tremendous impact across applications, and hardware engineering has made great strides, but these devices remain notoriously error prone. Careful design of low-level controls has been shown to compensate for the processes that induce hardware errors, leveraging techniques from optimal and robust control. However, these techniques rely heavily on the availability of highly accurate and detailed physical models, which generally achieve sufficient representative fidelity only for the simplest operations and generic noise modes. In this work, we use deep reinforcement learning to design a universal set of error-robust quantum logic gates at runtime on a superconducting quantum computer, without requiring knowledge of a specific Hamiltonian model of the system, its controls, or its underlying error processes. We experimentally demonstrate that a fully autonomous deep-reinforcement-learning agent can design single-qubit gates up to 3x faster than default DRAG operations, without additional leakage error and with robustness against calibration drifts over weeks. We then show that ZX(-pi/2) operations implemented using the cross-resonance interaction can outperform hardware default gates by more than 2x and likewise exhibit superior calibration-free performance for up to 25 days after optimization. We benchmark the performance of deep-reinforcement-learning-derived gates against other black-box optimization techniques, showing that deep reinforcement learning can achieve comparable or marginally superior performance even with limited hardware access.
Pages: 12