Multi-Agent Generative Adversarial Interactive Self-Imitation Learning for AUV Formation Control and Obstacle Avoidance

Cited by: 0
Authors
Fang, Zheng [1 ]
Chen, Tianhao [1 ]
Shen, Tian [1 ]
Jiang, Dong [1 ]
Zhang, Zheng [1 ]
Li, Guangliang [1 ]
Affiliations
[1] Ocean Univ China, Coll Elect Engn, Qingdao 266100, Peoples R China
Source
IEEE ROBOTICS AND AUTOMATION LETTERS | 2025, Vol. 10, Issue 05
Keywords
Multi-agent reinforcement learning; Imitation learning; AUV; formation control; REINFORCEMENT; ROBOTICS; GAME;
DOI
10.1109/LRA.2025.3550743
Chinese Library Classification
TP24 [Robotics];
Discipline Classification Codes
080202 ; 1405 ;
Abstract
Multiple autonomous underwater vehicles (multi-AUVs) can cooperatively accomplish tasks that a single AUV cannot complete. Recently, multi-agent reinforcement learning has been introduced to multi-AUV control. However, designing efficient reward functions for the various tasks in multi-AUV control is difficult or even impractical. Multi-agent generative adversarial imitation learning (MAGAIL) allows multi-AUVs to learn from expert demonstrations instead of pre-defined reward functions, but it requires optimal demonstrations and cannot surpass the demonstrations it is given. This letter builds upon MAGAIL by proposing multi-agent generative adversarial interactive self-imitation learning (MAGAISIL), which enables AUVs to learn policies by gradually replacing the provided sub-optimal demonstrations with self-generated good trajectories selected by a human trainer. Experimental results on three multi-AUV formation control and obstacle avoidance tasks, conducted on the Gazebo platform with our lab's AUV simulator, show that AUVs trained via MAGAISIL can surpass the provided sub-optimal expert demonstrations and reach performance close to, or even better than, MAGAIL trained with optimal demonstrations. Further results indicate that policies trained via MAGAISIL adapt to complex and varied tasks as well as MAGAIL trained on optimal demonstrations.
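The core mechanism the abstract describes, namely trainer-selected self-generated trajectories gradually replacing sub-optimal demonstrations in the discriminator's demonstration set, can be sketched as follows. This is a hypothetical illustration only; the class and field names (`DemoBuffer`, `"return"`) are ours and do not reflect the paper's actual implementation:

```python
class DemoBuffer:
    """Demonstration buffer for a GAIL-style discriminator (illustrative sketch).

    Interactive self-imitation idea: when a human trainer approves a
    self-generated trajectory, it replaces the lowest-return sub-optimal
    demonstration, so the buffer improves beyond the initial expert data.
    """

    def __init__(self, demos):
        # Each trajectory is a dict holding at least a scalar "return".
        self.trajs = list(demos)

    def maybe_replace(self, new_traj, trainer_approves):
        """Swap out the worst demo if the trainer approves new_traj."""
        if not trainer_approves(new_traj):
            return False
        # Index of the current lowest-return demonstration.
        worst = min(range(len(self.trajs)),
                    key=lambda i: self.trajs[i]["return"])
        if new_traj["return"] > self.trajs[worst]["return"]:
            self.trajs[worst] = new_traj
            return True
        return False


# Toy usage: two sub-optimal demos, one good self-generated trajectory
# that a (stubbed) trainer approves.
buf = DemoBuffer([{"return": 1.0}, {"return": 2.0}])
replaced = buf.maybe_replace({"return": 5.0}, trainer_approves=lambda t: True)
```

In the actual algorithm the approval signal comes from a human watching rollouts, and the buffer feeds the discriminator that produces the imitation reward; the sketch only shows the buffer-update step.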
Pages: 4356-4363
Page count: 8