Characterizing and Optimizing the End-to-End Performance of Multi-Agent Reinforcement Learning Systems

Cited: 0
Authors
Gogineni, Kailash [1 ]
Mei, Yongsheng [1 ]
Gogineni, Karthikeya
Wei, Peng [1 ]
Lan, Tian [1 ]
Venkataramani, Guru [1 ]
Affiliation
[1] George Washington Univ, Washington, DC 20052 USA
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION, IISWC 2024 | 2024
Funding
U.S. National Science Foundation;
Keywords
Multi-Agent Systems; Performance Analysis; Reinforcement Learning; Performance Optimization;
DOI
10.1109/IISWC63097.2024.00028
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Multi-agent reinforcement learning (MARL) systems can unlock the potential to model and control multiple autonomous decision-making agents simultaneously. During online training, MARL algorithms involve performance-intensive computations, such as the exploration and exploitation phases arising from a large observation-action space and a large number of training steps. Understanding and mitigating MARL performance limiters is key to practical adoption. In this paper, we first present a detailed characterization of MARL workloads under different multi-agent settings. Our experimental analysis identifies a critical performance bottleneck that limits scaling: mini-batch sampling of transition data. To mitigate this issue, we explore a series of optimization strategies. First, we investigate cache locality-aware sampling, which prioritizes intra-agent neighbor transitions over randomly picked transition samples within the baseline MARL algorithms. Next, we explore importance sampling techniques that preserve the learning performance/distribution while capturing the neighbors of important transitions. Finally, we design an additional algorithmic optimization that reorganizes the transition data layout to improve cache locality across agents during mini-batch sampling. We evaluate our optimizations using popular MARL workloads on multi-agent particle games. Our work highlights several opportunities for enhancing the performance of multi-agent systems, with end-to-end training time improvements ranging from 8.2% (3 agents) to 20.5% (24 agents) over the baseline MADDPG, affirming the value of deeply understanding MARL performance bottlenecks and mitigating them effectively.
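As a rough illustration of the three optimization ideas the abstract names, the sketch below mimics them in plain NumPy: a structure-of-arrays transition layout, anchor-plus-neighbor sampling for cache locality, and a priority-weighted variant that keeps the neighbors of important transitions. This is a minimal sketch under assumed details, not the authors' implementation; the class name LocalityAwareReplayBuffer, the window parameter, and all method names are hypothetical stand-ins.

```python
import numpy as np


class LocalityAwareReplayBuffer:
    """Per-agent replay buffer sketch (hypothetical, not the paper's code).

    Each field is stored contiguously (structure-of-arrays), in the spirit
    of the paper's transition-layout reorganization: gathering a batch then
    reads dense, cache-friendly regions instead of interleaved records.
    """

    def __init__(self, capacity, obs_dim, act_dim):
        self.capacity = capacity
        self.size = 0
        self.ptr = 0
        self.obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.act = np.zeros((capacity, act_dim), dtype=np.float32)
        self.rew = np.zeros(capacity, dtype=np.float32)
        self.next_obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.done = np.zeros(capacity, dtype=np.float32)

    def add(self, o, a, r, o2, d):
        # Ring-buffer insertion: overwrite the oldest transition when full.
        i = self.ptr
        self.obs[i], self.act[i], self.rew[i] = o, a, r
        self.next_obs[i], self.done[i] = o2, d
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample_uniform(self, batch_size, rng):
        # Baseline: fully random indices scatter reads across the buffer.
        idx = rng.integers(0, self.size, size=batch_size)
        return self._gather(idx)

    def sample_locality_aware(self, batch_size, rng, window=8):
        # Locality-aware sketch: draw a few random anchors and expand each
        # with `window` consecutive neighbor transitions, so a batch reads
        # short contiguous runs instead of scattered single transitions.
        n_anchors = (batch_size + window - 1) // window
        anchors = rng.integers(0, max(1, self.size - window + 1),
                               size=n_anchors)
        idx = (anchors[:, None] + np.arange(window)[None, :]).ravel()
        idx = np.minimum(idx, self.size - 1)  # guard a nearly empty buffer
        return self._gather(idx[:batch_size])

    def sample_prioritized_neighbors(self, priorities, batch_size, rng,
                                     window=4):
        # Importance-sampling sketch: anchors are drawn proportionally to
        # per-transition priorities, then each anchor's neighbors are added
        # so important regions of the buffer are read contiguously.
        p = priorities[:self.size] / priorities[:self.size].sum()
        n_anchors = (batch_size + window - 1) // window
        anchors = rng.choice(self.size, size=n_anchors, p=p)
        idx = (anchors[:, None] + np.arange(window)[None, :]).ravel()
        idx = np.minimum(idx, self.size - 1)
        return self._gather(idx[:batch_size])

    def _gather(self, idx):
        return (self.obs[idx], self.act[idx], self.rew[idx],
                self.next_obs[idx], self.done[idx])
```

With batch_size = 1024 and window = 8, for instance, sample_locality_aware issues 128 contiguous 8-transition reads instead of 1024 scattered ones; the window size is the knob that trades sample decorrelation against cache locality.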
Pages: 224 - 235
Page count: 12
Related Papers
50 in total
  • [21] Generalized learning automata for multi-agent reinforcement learning
    De Hauwere, Yann-Michael
    Vrancx, Peter
    Nowe, Ann
    AI COMMUNICATIONS, 2010, 23 (04) : 311 - 324
  • [22] End-to-end multimodal image registration via reinforcement learning
    Hu, Jing
    Luo, Ziwei
    Wang, Xin
    Sun, Shanhui
    Yin, Youbing
    Cao, Kunlin
    Song, Qi
    Lyu, Siwei
    Wu, Xi
    MEDICAL IMAGE ANALYSIS, 2021, 68
  • [23] An analysis of multi-agent reinforcement learning for decentralized inventory control systems
    Mousa, Marwan
    van de Berg, Damien
    Kotecha, Niki
    Chanona, Ehecatl Antonio del Rio
    Mowbray, Max
    COMPUTERS & CHEMICAL ENGINEERING, 2024, 188
  • [24] Multi-agent deep reinforcement learning: a survey
    Gronauer, Sven
    Diepold, Klaus
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (02) : 895 - 943
  • [26] Multi-Agent Reinforcement Learning for Highway Platooning
    Kolat, Mate
    Becsi, Tamas
    ELECTRONICS, 2023, 12 (24)
  • [27] Deep Reinforcement Learning-Driven Optimization of End-to-End Key Provision in QKD Systems
    Seok, Yeongjun
    Kim, Ju-Bong
    Han, Youn-Hee
    Lim, Hyun-Kyo
    Lee, Chankyun
    Lee, Wonhyuk
    JOURNAL OF NETWORK AND SYSTEMS MANAGEMENT, 2025, 33 (02)
  • [28] Online Reinforcement Learning in Multi-Agent Systems for Distributed Energy Systems
    Menon, Bharat R.
    Menon, Sangeetha B.
    Srinivasan, Dipti
    Jain, Lakhmi
    2014 IEEE INNOVATIVE SMART GRID TECHNOLOGIES - ASIA (ISGT ASIA), 2014, : 791 - 796
  • [29] Deep reinforcement learning framework for end-to-end semiconductor process control
    Hirtz, T.
    Tian, H.
    Shahzad, S.
    Wu, F.
    Yang, Y.
    Ren, T.-L.
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (20) : 12443 - 12460
  • [30] Reinforcement Learning Based VNF Scheduling with End-to-End Delay Guarantee
    Li, Junling
    Shi, Weisen
    Zhang, Ning
    Shen, Xuemin Sherman
    2019 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2019