Research on Efficient Multiagent Reinforcement Learning for Multiple UAVs' Distributed Jamming Strategy

Times Cited: 0
Authors
Ran, Weizhi [1 ]
Luo, Rong [2 ]
Zhang, Funing [3 ]
Luo, Renwei [1 ]
Xu, Yang [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] PLA, Naval Res Acad, Beijing 100161, Peoples R China
[3] Xian Univ Architecture & Technol, Sch Informat & Control Engn, Xian 710055, Peoples R China
Keywords
multiagent reinforcement learning; IPPO learning algorithm; multiple UAVs distributed jamming strategy; LEVEL; GAME; GO
DOI
10.3390/electronics12183874
Chinese Library Classification (CLC) number
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
To support real-time Unmanned Aerial Vehicle (UAV) joint electromagnetic countermeasure decisions, coordinating multiple UAVs to efficiently jam distributed hostile radar stations requires complex and highly flexible strategies. However, given the high dimensionality and partial observability of the electromagnetic battleground, no such strategy can be generated by pre-coded software or devised by a human commander. In this paper, an initial effort is made to integrate multiagent reinforcement learning, which has proven effective in game strategy generation, into the distributed airborne electromagnetic countermeasures domain. The key idea is to design a training simulator that closely approximates a real electromagnetic countermeasure strategy game, so that large amounts of valuable training data can be collected easily, in contrast to the real battleground, where data are sparse and far from sufficient. In addition, the simulator models all the decision factors necessary for multi-UAV coordination, so that the agents can freely search for their optimal joint strategies with our improved Independent Proximal Policy Optimization (IPPO) learning algorithm, which suits the game well. Finally, a typical domain scenario is built for testing, and the use case and experimental results show that the design is efficient at coordinating a group of UAVs equipped with lightweight jamming devices. Their coordination strategies not only handle the given tasks of dynamically jamming hostile radar stations but also exceed expectations: the reinforcement learning algorithm performs heuristic searches that help the group find the enemy's tactical vulnerabilities and improve the UAVs' jamming performance.
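The abstract names an improved IPPO algorithm but does not specify its details. As context, the core of standard IPPO can be sketched as follows: every agent runs its own PPO learner on its local observations and rewards, treating the other agents as part of the environment. This is a minimal illustrative sketch, not the paper's method; all function names and the clipping parameter default are assumptions.

```python
# Minimal sketch of the IPPO idea (illustrative, not the paper's
# improved variant): each UAV agent optimizes the standard PPO clipped
# surrogate objective using only its OWN observations, actions, and
# advantage estimates; other agents are treated as part of the environment.

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate term (to be maximized) for one sample.

    ratio: pi_new(a|o) / pi_old(a|o) for this agent's own action
    advantage: this agent's advantage estimate for that action
    eps: clipping parameter (0.2 is the common default, an assumption here)
    """
    clipped_ratio = max(1.0 - eps, min(1.0 + eps, ratio))
    # Taking the minimum keeps the update pessimistic: large policy
    # ratios cannot inflate the objective beyond the clipped value.
    return min(ratio * advantage, clipped_ratio * advantage)

def independent_objectives(per_agent_batches, eps=0.2):
    """One IPPO step: compute each agent's mean objective independently.

    per_agent_batches: list with one entry per UAV, each a list of
    (ratio, advantage) pairs gathered from that agent's local trajectory.
    No agent's objective depends on any other agent's data.
    """
    objectives = []
    for batch in per_agent_batches:
        mean_obj = sum(ppo_clip_objective(r, a, eps) for r, a in batch) / len(batch)
        objectives.append(mean_obj)
    return objectives
```

In a full implementation each of these per-agent objectives would be maximized by gradient ascent on that agent's own policy network; the independence of the updates is what makes the approach scale to many UAVs.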
Pages: 12
Cited References
26 records in total
  • [1] Deep Reinforcement Learning A brief survey
    Arulkumaran, Kai
    Deisenroth, Marc Peter
    Brundage, Miles
    Bharath, Anil Anthony
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2017, 34 (06) : 26 - 38
  • [2] Busoniu L, 2010, STUD COMPUT INTELL, V310, P183
  • [3] Hessel M, 2018, AAAI CONF ARTIF INTE, P3215
  • [4] Reinforcement learning: A survey
    Kaelbling, LP
    Littman, ML
    Moore, AW
    [J]. JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 1996, 4 : 237 - 285
  • [5] Källström J, 2020, IEEE SYS MAN CYBERN, P2157, DOI [10.1109/smc42975.2020.9283492, 10.1109/SMC42975.2020.9283492]
  • [6] Collaborative Decision-Making Method for Multi-UAV Based on Multiagent Reinforcement Learning
    Li, Shaowei
    Jia, Yuhong
    Yang, Fan
    Qin, Qingyang
    Gao, Hui
    Zhou, Yaoming
    [J]. IEEE ACCESS, 2022, 10 : 91385 - 91396
  • [7] The Design of Simulation System for Multi-UAV Cooperative Guidance
    Ma Jifeng
    Zhou Chunmei
    Zhang Shuo
    Dong Wenjie
    Zhang Chunxia
    Lin Jinyong
    [J]. 2015 FIFTH INTERNATIONAL CONFERENCE ON INSTRUMENTATION AND MEASUREMENT, COMPUTER, COMMUNICATION AND CONTROL (IMCCC), 2015, : 1250 - 1254
  • [8] Mnih V, 2016, PR MACH LEARN RES, V48
  • [9] Human-level control through deep reinforcement learning
    Mnih, Volodymyr
    Kavukcuoglu, Koray
    Silver, David
    Rusu, Andrei A.
    Veness, Joel
    Bellemare, Marc G.
    Graves, Alex
    Riedmiller, Martin
    Fidjeland, Andreas K.
    Ostrovski, Georg
    Petersen, Stig
    Beattie, Charles
    Sadik, Amir
    Antonoglou, Ioannis
    King, Helen
    Kumaran, Dharshan
    Wierstra, Daan
    Legg, Shane
    Hassabis, Demis
    [J]. NATURE, 2015, 518 (7540) : 529 - 533
  • [10] OpenAI, 2019, arXiv, DOI 10.48550/arXiv.1912.06680