Curriculum-Based Deep Reinforcement Learning for Quantum Control

Cited by: 15
Authors
Ma, Hailan [1 ]
Dong, Daoyi [1 ,2 ]
Ding, Steven X. [2 ]
Chen, Chunlin [3 ]
Affiliations
[1] Univ New South Wales, Sch Engn & Informat Technol, Canberra, ACT 2600, Australia
[2] Univ Duisburg Essen, Inst Automat Control & Complex Syst AKS, D-47057 Duisburg, Germany
[3] Nanjing Univ, Sch Management & Engn, Dept Control & Syst Engn, Nanjing 210093, Peoples R China
Funding
Australian Research Council; National Natural Science Foundation of China
Keywords
Task analysis; Quantum system; Quantum computing; Process control; Sequential analysis; Quantum state; Quantum entanglement; Curriculum learning; deep reinforcement learning (DRL); quantum control;
DOI
10.1109/TNNLS.2022.3153502
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Deep reinforcement learning (DRL) has been recognized as an efficient technique for designing optimal strategies for complex systems without prior knowledge of the control landscape. To achieve fast and precise control of quantum systems, we propose a novel DRL approach that constructs a curriculum consisting of a set of intermediate tasks defined by fidelity thresholds, where the tasks in a curriculum can be statically determined before the learning process or dynamically generated during it. By transferring knowledge between successive tasks and sequencing tasks according to their difficulty, the proposed curriculum-based DRL (CDRL) method enables the agent to focus on easy tasks in the early stage, then move on to difficult tasks, and eventually approach the final task. Numerical comparisons with traditional methods [the gradient method (GD) and genetic algorithm (GA)] and several other DRL methods demonstrate that CDRL achieves improved control performance for quantum systems and provides an efficient way to identify optimal strategies with few control pulses.
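The static-curriculum idea in the abstract can be illustrated with a toy sketch: a single qubit driven by piecewise-constant control pulses, where intermediate tasks are increasing fidelity thresholds and the solution to each task is carried over (knowledge transfer) as the starting point for the next. The specific thresholds, Hamiltonian, and the greedy hill-climbing "learner" standing in for a full DRL agent are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
dt, n_steps = 0.2, 20
psi0 = np.array([1., 0.], dtype=complex)                 # initial state |0>
target = np.array([1., 1.], dtype=complex) / np.sqrt(2)  # target state |+>

def propagator(u):
    """Exact exp(-i*dt*(sz + u*sx)) for a two-level system."""
    H = sz + u * sx
    w = np.sqrt(1.0 + u * u)      # eigenvalue magnitude of H
    return np.cos(dt * w) * np.eye(2) - 1j * np.sin(dt * w) * H / w

def fidelity(pulses):
    """State-transfer fidelity |<target|psi(T)>|^2 for a pulse sequence."""
    psi = psi0
    for u in pulses:
        psi = propagator(u) @ psi
    return abs(np.vdot(target, psi)) ** 2

# Static curriculum: intermediate tasks given by increasing fidelity
# thresholds, fixed before learning starts (the paper also considers
# generating thresholds dynamically during learning).
thresholds = [0.50, 0.80, 0.95, 0.99]
pulses = rng.normal(0.0, 1.0, n_steps)   # initial "policy": pulse amplitudes
for F_goal in thresholds:                # tasks ordered easy -> hard
    F = fidelity(pulses)
    for _ in range(3000):                # search budget per intermediate task
        if F >= F_goal:
            break                        # task solved; pulses transfer onward
        trial = pulses + rng.normal(0.0, 0.2, n_steps)  # local perturbation
        F_trial = fidelity(trial)
        if F_trial >= F:                 # greedy accept-if-not-worse update
            pulses, F = trial, F_trial
print(f"final fidelity: {fidelity(pulses):.3f}")
```

Solving the easy low-threshold tasks first keeps each search step small, and warm-starting from the previous task's pulses is the knowledge transfer between successive tasks that the abstract describes.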
Pages: 8852-8865
Page count: 14