Hierarchical Adaptation of Multiagent Deep Reinforcement Learning for Multi-Domain Uncrewed Aerial and Ground Vehicle Coordination

Cited by: 0
Authors
Hulede, Ian Ellis L. [1 ]
Mukherjee, Amitav [1 ]
Ashdown, Jonathan [2 ]
Affiliations
[1] Tiami Networks, Elk Grove, CA 95624 USA
[2] Air Force Res Lab, Wright Patterson AFB, OH USA
Source
MILCOM 2024 - 2024 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2024
Keywords
Reinforcement Learning; Multi-Agent Reinforcement Learning; Multi-Agent System; Fusion Multi-Actor-Attention-Critic (F-MAAC)
DOI
10.1109/MILCOM61039.2024.10773714
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The coordination of Uncrewed Aerial Vehicles (UAVs) and Uncrewed Ground Vehicles (UGVs) enhances several commercial and military applications, yet it confronts specific challenges, including restricted vertical access, limited operational range, and power constraints, alongside the necessity of maintaining robust communication links between vehicles. This work introduces a hierarchical adaptation of a multi-agent deep reinforcement learning (MADRL) model, termed Fusion Multi-Actor-Attention-Critic (F-MAAC), engineered for the adept management of UAVs and UGVs in intricate settings. The proposed model combines a high-level partner selector with a low-level action selector, employing temporal abstraction and sophisticated communication protocols to amplify operational efficacy and precision in task execution. Notably, the integration enables a Lead UAV Agent to efficiently coordinate the surveillance of environmental variables that are beyond the perceptual field of the UGVs. Comparative analysis between our proposed model and other models underscores the superior performance and enhanced coordination facilitated by the hierarchical F-MAAC model, advocating its utility in complex operational contexts.
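The hierarchical scheme described in the abstract, a high-level partner selector that commits to a choice over a window of steps (temporal abstraction) and a low-level action selector that acts every step conditioned on that choice, can be sketched as follows. This is an illustrative outline only, not the paper's F-MAAC implementation; the class, method names, and the `option_length` parameter are assumptions for the sketch, and the two random selectors stand in for the learned policies.

```python
import random

class HierarchicalAgent:
    """Sketch of a two-level policy: the high-level partner selector
    re-chooses a teammate only every `option_length` steps (temporal
    abstraction), while the low-level action selector picks a primitive
    action at every step, conditioned on the current partner."""

    def __init__(self, partners, actions, option_length=4, seed=0):
        self.partners = partners
        self.actions = actions
        self.option_length = option_length
        self.rng = random.Random(seed)
        self.current_partner = None

    def select_partner(self, observation):
        # Placeholder for the learned high-level (partner-selection) policy.
        return self.rng.choice(self.partners)

    def select_action(self, observation, partner):
        # Placeholder for the learned low-level (action-selection) policy.
        return self.rng.choice(self.actions)

    def step(self, t, observation):
        # The high level runs only at option boundaries; the low level
        # runs every step, conditioned on the committed partner.
        if t % self.option_length == 0:
            self.current_partner = self.select_partner(observation)
        action = self.select_action(observation, self.current_partner)
        return self.current_partner, action

# A lead UAV coordinating with two UGVs over 8 steps: the partner can
# change only at steps 0 and 4, while an action is chosen at every step.
agent = HierarchicalAgent(partners=["UGV-1", "UGV-2"],
                          actions=["hover", "move", "scan"])
trace = [agent.step(t, observation=None) for t in range(8)]
```

The key design point the sketch captures is the decoupled decision rates: the partner assignment is held fixed within each option window, so the low-level controller can specialize its behavior to one teammate at a time.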
Pages: 487-492
Number of pages: 6