Multi-agent consensus under communication failure using Actor-Critic Reinforcement Learning

Cited by: 0
Authors
Kandath, Harikumar [1 ]
Senthilnath, J. [2 ]
Sundaram, Suresh [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] ASTAR, Inst Infocomm Res, Singapore 138632, Singapore
Source
2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI) | 2018
Keywords
Actor; Consensus; Critic; Neural network; Reinforcement learning; MULTIAGENT SYSTEMS;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper addresses the problem of achieving multi-agent consensus under a sudden, total communication failure. The agents are assumed to move along the periphery of a circle. The proposed solution uses the actor-critic reinforcement learning method to achieve consensus when there is no communication between the agents. A performance index is defined that takes into account the differences in angular position between neighbouring agents. The actions of each agent while achieving consensus under full communication are learned by an actor neural network, while the critic neural network learns to predict the performance index. The proposed solution is validated by a numerical simulation with five agents moving along the periphery of a circle.
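The abstract describes the core mechanism: an actor network produces each agent's action and a critic network predicts a performance index built from the angular differences between neighbouring agents on the circle. The sketch below is a minimal, hypothetical illustration of that actor-critic structure, not the authors' implementation; the linear function approximators, the choice of purely local features, the reward defined as the decrease in the performance index, the five-agent setup, and all hyperparameters are assumptions made for the example.

```python
# Minimal actor-critic sketch for angular consensus on a circle (illustrative only).
# Assumptions: linear actor/critic per agent, Gaussian exploration, reward = decrease
# in the performance index; none of this is taken from the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

N = 5            # number of agents on the circle (assumed)
DT = 0.05        # integration time step
SIGMA = 0.1      # exploration noise std of the Gaussian policy
A_ACTOR, A_CRITIC, GAMMA = 1e-3, 1e-2, 0.95   # learning rates and discount (assumed)

def phi(angle):
    """Local features of an agent's own angular position."""
    return np.array([np.sin(angle), np.cos(angle), 1.0])

def performance_index(theta):
    """Cost from angular differences between ring neighbours, wrapped to [-pi, pi]."""
    d = np.angle(np.exp(1j * (theta - np.roll(theta, 1))))
    return float(np.sum(d ** 2))

W_actor = rng.normal(scale=0.1, size=(N, 3))   # one linear actor per agent
W_critic = rng.normal(scale=0.1, size=(N, 3))  # one linear critic per agent
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

# Training phase: communication is available, so the global performance index
# can be evaluated and used as the learning signal.
for step in range(30000):
    J = performance_index(theta)
    feats = np.array([phi(t) for t in theta])
    mu = np.einsum("ij,ij->i", W_actor, feats)          # actor mean actions
    noise = rng.normal(scale=SIGMA, size=N)
    u = mu + noise                                       # exploratory angular velocities
    theta_next = np.mod(theta + DT * u, 2.0 * np.pi)

    r = J - performance_index(theta_next)                # reward: decrease of the index
    feats_next = np.array([phi(t) for t in theta_next])
    for i in range(N):
        v, v_next = W_critic[i] @ feats[i], W_critic[i] @ feats_next[i]
        td = r + GAMMA * v_next - v                      # temporal-difference error
        W_critic[i] += A_CRITIC * td * feats[i]          # critic learns the index
        W_actor[i] += A_ACTOR * td * (noise[i] / SIGMA ** 2) * feats[i]  # policy-gradient step
    theta = theta_next

# After a communication failure, each agent keeps acting with its learned actor,
# using only its own angular position and no neighbour information.
for step in range(2000):
    u = np.array([W_actor[i] @ phi(theta[i]) for i in range(N)])
    theta = np.mod(theta + DT * u, 2.0 * np.pi)

print("final performance index:", performance_index(theta))
```

The point of the split is that training needs the globally computed performance index (and hence communication), while the trained actor only reads locally available state, which is what lets the agents keep moving toward consensus once communication is lost.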
Pages: 1461-1465
Page count: 5
Related Papers
50 records in total
  • [11] Actor-critic reinforcement learning for bidding in bilateral negotiation
    Arslan, Furkan
    Aydogan, Reyhan
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2022, 30 (05) : 1695 - 1714
  • [12] USING ACTOR-CRITIC REINFORCEMENT LEARNING FOR CONTROL AND FLIGHT FORMATION OF QUADROTORS
    Torres, Edgar
    Xu, Lei
    Sardarmehni, Tohid
    PROCEEDINGS OF ASME 2022 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2022, VOL 5, 2022,
  • [13] Actor-Critic Reinforcement Learning for Control With Stability Guarantee
    Han, Minghao
    Zhang, Lixian
    Wang, Jun
    Pan, Wei
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2020, 5 (04) : 6217 - 6224
  • [14] MARS: Malleable Actor-Critic Reinforcement Learning Scheduler
    Baheri, Betis
    Tronge, Jacob
    Fang, Bo
    Li, Ang
    Chaudhary, Vipin
    Guan, Qiang
    2022 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE, IPCCC, 2022,
  • [15] Fleet planning under demand and fuel price uncertainty using actor-critic reinforcement learning
    Geursen, Izaak L.
    Santos, Bruno F.
    Yorke-Smith, Neil
    JOURNAL OF AIR TRANSPORT MANAGEMENT, 2023, 109
  • [16] Forward Actor-Critic for Nonlinear Function Approximation in Reinforcement Learning
    Veeriah, Vivek
    van Seijen, Harm
    Sutton, Richard S.
    AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 556 - 564
  • [17] Automatic collective motion tuning using actor-critic deep reinforcement learning
    Abpeikar, Shadi
    Kasmarik, Kathryn
    Garratt, Matthew
    Hunjet, Robert
    Khan, Md Mohiuddin
    Qiu, Huanneng
    SWARM AND EVOLUTIONARY COMPUTATION, 2022, 72
  • [18] Manipulator Motion Planning based on Actor-Critic Reinforcement Learning
    Li, Qiang
    Nie, Jun
    Wang, Haixia
    Lu, Xiao
    Song, Shibin
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 4248 - 4254
  • [19] A Prioritized objective actor-critic method for deep reinforcement learning
    Nguyen, Ngoc Duy
    Nguyen, Thanh Thi
    Vamplew, Peter
    Dazeley, Richard
    Nahavandi, Saeid
    NEURAL COMPUTING AND APPLICATIONS, 2021, 33 : 10335 - 10349
  • [20] UAV Assisted Cooperative Caching on Network Edge Using Multi-Agent Actor-Critic Reinforcement Learning
    Araf, Sadman
    Saha, Adittya Soukarjya
    Kazi, Sadia Hamid
    Tran, Nguyen H. H.
    Alam, Md. Golam Rabiul
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (02) : 2322 - 2337