Sample-efficient multi-agent reinforcement learning with masked reconstruction

Cited: 0
Authors
Kim, Jung In [1 ]
Lee, Young Jae [1 ]
Heo, Jongkook [1 ]
Park, Jinhyeok [1 ]
Kim, Jaehoon [1 ]
Lim, Sae Rin [1 ]
Jeong, Jinyong [1 ]
Kim, Seoung Bum [1 ]
Affiliations
[1] Korea Univ, Sch Ind & Management Engn, Seoul, South Korea
Source
PLOS ONE | 2023 / Volume 18 / Issue 09
Keywords
DOI
10.1371/journal.pone.0291545
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Deep reinforcement learning (DRL) combines reinforcement learning (RL) and deep learning to address complex decision-making problems in high-dimensional environments. Although DRL has been remarkably successful, its low sample efficiency requires long training times and large amounts of data to learn optimal policies, and these limitations are even more pronounced in multi-agent reinforcement learning (MARL). Various studies have therefore sought to improve the sample efficiency of DRL. In this study, we propose M-QMIX, an approach that combines a masked reconstruction task with QMIX. By introducing masked reconstruction as an auxiliary task, we aim to improve sample efficiency, a fundamental limitation of RL in multi-agent systems. Experiments on the StarCraft II micromanagement benchmark were conducted to validate the effectiveness of the proposed method, using 11 scenarios comprising five easy, three hard, and three very hard scenarios. We deliberately limited the number of training time steps in each scenario to highlight differences in sample efficiency. The proposed method outperforms QMIX in eight of the 11 scenarios. These results provide strong evidence that the proposed method is more sample-efficient than QMIX and that it effectively addresses this limitation of DRL in multi-agent systems.
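The abstract describes attaching a masked reconstruction objective to QMIX as an auxiliary task. The sketch below illustrates the general idea in PyTorch; it is not the authors' implementation. The MaskedReconstruction class, the network sizes, the mask ratio, and the loss weighting are illustrative assumptions only: agent observations are partially masked, an encoder-decoder reconstructs the masked features, and the reconstruction error is added to the usual TD loss.

import torch
import torch.nn as nn

class MaskedReconstruction(nn.Module):
    """Encode partially masked agent observations and reconstruct the originals."""

    def __init__(self, obs_dim: int, hidden_dim: int = 64, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio  # assumed fraction of observation features to mask
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim); randomly zero out a subset of features
        keep = (torch.rand_like(obs) > self.mask_ratio).float()
        recon = self.decoder(self.encoder(obs * keep))
        # Reconstruction error measured only on the masked (dropped) positions
        return ((recon - obs) ** 2 * (1.0 - keep)).mean()

# Hypothetical usage: combine the auxiliary loss with the standard QMIX TD loss.
batch, n_agents, obs_dim = 32, 5, 80
obs = torch.randn(batch, n_agents, obs_dim)
aux = MaskedReconstruction(obs_dim)
td_loss = torch.tensor(0.0)          # placeholder for the QMIX TD loss
loss = td_loss + 0.1 * aux(obs)      # 0.1 is an assumed auxiliary-loss weight
loss.backward()

In practice the encoder would typically share parameters with, or feed into, each agent's utility network so that the reconstruction signal shapes the representations used for value estimation; that wiring is omitted here.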
Pages: 14
Related Papers
50 records in total
  • [21] Efficient Communications in Multi-Agent Reinforcement Learning for Mobile Applications
    Lv, Zefang
    Xiao, Liang
    Du, Yousong
    Zhu, Yunjun
    Han, Shuai
    Liu, Yong-Jin
    [J]. IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (09) : 12440 - 12454
  • [22] Efficient Communications for Multi-Agent Reinforcement Learning in Wireless Networks
    Lv, Zefang
    Du, Yousong
    Chen, Yifan
    Xiao, Liang
    Han, Shuai
    Ji, Xiangyang
    [J]. IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 583 - 588
  • [23] Communication-Efficient and Federated Multi-Agent Reinforcement Learning
    Krouka, Mounssif
    Elgabli, Anis
    Ben Issaid, Chaouki
    Bennis, Mehdi
    [J]. IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2022, 8 (01) : 311 - 320
  • [24] Safe and Sample-Efficient Reinforcement Learning for Clustered Dynamic Environments
    Chen, Hongyi
    Liu, Changliu
    [J]. IEEE CONTROL SYSTEMS LETTERS, 2022, 6 : 1928 - 1933
  • [25] Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
    Qiao, Dan
    Yin, Ming
    Min, Ming
    Wang, Yu-Xiang
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [26] Sample-Efficient Reinforcement Learning of Partially Observable Markov Games
    Liu, Qinghua
    Szepesvari, Csaba
    Jin, Chi
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [27] Multi-agent DQN with sample-efficient updates for large inter-slice orchestration problems
    Doanis, Pavlos
    Spyropoulos, Thrasyvoulos
    [J]. 2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024, : 772 - 777
  • [28] MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning
    Song, Haolin
    Feng, Mingxiao
    Zhou, Wengang
    Li, Houqiang
    [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 4226 - 4234
  • [29] Sample-Efficient Deep Reinforcement Learning with Directed Associative Graph
    Dujia Yang
    Xiaowei Qin
    Xiaodong Xu
    Chensheng Li
    Guo Wei
    [J]. CHINA COMMUNICATIONS, 2021, 18 (06) : 100 - 113
  • [30] MAMBPO: Sample-efficient multi-robot reinforcement learning using learned world models
    Willemsen, Daniel
    Coppola, Mario
    de Croon, Guido C. H. E.
    [J]. 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 5635 - 5640