Adaptive optics control with multi-agent model-free reinforcement learning

Cited by: 34
Authors
Pou, B. [1 ,2 ]
Ferreira, F. [3 ]
Quinones, E. [1 ]
Gratadour, D. [3 ,4 ]
Martin, M. [2 ]
Affiliations
[1] Barcelona Supercomputing Ctr BSC, C Jordi Girona 29, Barcelona 08034, Spain
[2] Univ Politecn Catalunya UPC, Comp Sci Dept, C Jordi Girona 31, Barcelona 08034, Spain
[3] Univ Paris Diderot, Univ PSL, Sorbonne Paris Cite, Sorbonne Univ, CNRS, Observ Paris, LESIA, 5 Pl Jules Janssen, F-92195 Meudon, France
[4] Australian Natl Univ, Res Sch Astron & Astrophys, Canberra, ACT 2611, Australia
Keywords
QUADRATIC GAUSSIAN CONTROL; WAVE-FRONT RECONSTRUCTION;
DOI
10.1364/OE.444099
Chinese Library Classification Number
O43 [Optics]
Subject Classification Code
070207; 0803
Abstract
We present a novel formulation of closed-loop adaptive optics (AO) control as a multi-agent reinforcement learning (MARL) problem in which the controller learns a non-linear policy and requires no a priori information on the dynamics of the atmosphere. We identify the challenges of applying a reinforcement learning (RL) method to AO and, to address them, propose combining model-free MARL for control with an autoencoder neural network that mitigates the effect of noise. Moreover, we extend existing error budget analysis methods to include an RL controller. Experimental results for an 8 m telescope equipped with a 40x40 Shack-Hartmann system show a significant increase in performance over the integrator baseline and performance comparable to a model-based predictive approach, a linear quadratic Gaussian controller with perfect knowledge of the atmospheric conditions. Finally, the error budget analysis provides evidence that the RL controller partially compensates for bandwidth error and helps mitigate the propagation of aliasing. © 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
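For context, the integrator baseline the abstract compares against is the standard closed-loop law c_{t+1} = c_t + g * R * s_t, where s_t are the wavefront-sensor slope measurements, R is a command (reconstructor) matrix, and g is a scalar loop gain; the MARL controller learns non-linear corrections on top of such a loop. The sketch below only illustrates that control structure under assumed conventions, not the authors' implementation: the residual-action formulation, the per-agent actuator patch partition, and all names (IntegratorController, MultiAgentResidualController, the policy callables) are assumptions.

```python
# Illustrative sketch only (not the paper's code): a standard AO integrator
# baseline plus a hypothetical multi-agent wrapper in which each agent adds a
# learned residual command for its own patch of deformable-mirror actuators.
import numpy as np


class IntegratorController:
    """Classic integrator law: c_{t+1} = c_t + gain * R @ slopes
    (sign convention depends on how the reconstructor R is defined)."""

    def __init__(self, reconstructor: np.ndarray, gain: float = 0.5):
        self.R = reconstructor                      # (n_actuators, n_slopes)
        self.gain = gain
        self.commands = np.zeros(reconstructor.shape[0])

    def step(self, slopes: np.ndarray) -> np.ndarray:
        self.commands = self.commands + self.gain * self.R @ slopes
        return self.commands


class MultiAgentResidualController:
    """Each agent outputs a residual command for its own actuator patch,
    added on top of the integrator output (a common residual-RL setup;
    whether the paper uses exactly this decomposition is an assumption)."""

    def __init__(self, integrator, policies, patches):
        self.integrator = integrator
        self.policies = policies    # one callable per agent: slopes -> residual
        self.patches = patches      # one actuator-index array per agent

    def step(self, slopes: np.ndarray) -> np.ndarray:
        commands = self.integrator.step(slopes).copy()
        for policy, idx in zip(self.policies, self.patches):
            commands[idx] += policy(slopes)         # learned non-linear correction
        return commands


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_slopes, n_act, n_agents = 64, 16, 4
    R = rng.normal(scale=0.1, size=(n_act, n_slopes))
    patches = np.array_split(np.arange(n_act), n_agents)
    # Placeholder "policies" returning zero residuals stand in for trained networks.
    policies = [lambda s, k=len(idx): np.zeros(k) for idx in patches]
    ctrl = MultiAgentResidualController(IntegratorController(R), policies, patches)
    print(ctrl.step(rng.normal(size=n_slopes)).shape)   # -> (16,)
```

In a residual setup of this kind, the integrator keeps the loop stable while each agent only has to learn a local correction, which is one plausible reading of how a per-region MARL controller can outperform the plain integrator.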
Pages: 2991-3015
Page count: 25
Related papers
50 records in total
  • [21] Model-Free Adaptive Control Approach Using Integral Reinforcement Learning
    Abouheaf, Mohammed
    Gueaieb, Wail
    2019 IEEE INTERNATIONAL SYMPOSIUM ON ROBOTIC AND SENSORS ENVIRONMENTS (ROSE 2019), 2019: 84-90
  • [22] An Enhanced Model-Free Reinforcement Learning Algorithm to Solve Nash Equilibrium for Multi-Agent Cooperative Game Systems
    Jiang, Yuannan
    Tan, Fuxiao
    IEEE ACCESS, 2020, 8: 223743-223755
  • [23] Model-free algorithm for consensus of discrete-time multi-agent systems using reinforcement learning method
    Long, Mingkang
    An, Qing
    Su, Housheng
    Luo, Hui
    Zhao, Jin
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2023, 360 (14): 10564-10581
  • [24] Model-free adaptive cluster consensus control for nonlinear multi-agent systems under DoS attack
    Li, Yuhan
    Bu, Xuhui
    Guo, Jinli
    2023 IEEE 12TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE, DDCLS, 2023: 857-862
  • [25] Model-free adaptive consensus tracking control for unknown nonlinear multi-agent systems with sensor saturation
    Zhao, Huarong
    Peng, Li
    Yu, Hongnian
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2021, 31 (13): 6473-6491
  • [26] Multi-agent reinforcement learning for character control
    Li, Cheng
    Fussell, Levi
    Komura, Taku
    VISUAL COMPUTER, 2021, 37 (12): 3115-3123
  • [28] Model-Free Quantum Control with Reinforcement Learning
    Sivak, V. V.
    Eickbusch, A.
    Liu, H.
    Royer, B.
    Tsioutsios, I.
    Devoret, M. H.
    PHYSICAL REVIEW X, 2022, 12 (01)
  • [29] Adaptive mean field multi-agent reinforcement learning
    Wang, Xiaoqiang
    Ke, Liangjun
    Zhang, Gewei
    Zhu, Dapeng
    INFORMATION SCIENCES, 2024, 669
  • [30] Adaptive Average Exploration in Multi-Agent Reinforcement Learning
    Hall, Garrett
    Holladay, Ken
    2020 AIAA/IEEE 39TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC) PROCEEDINGS, 2020