A Hierarchical Deep Reinforcement Learning Framework for 6-DOF UCAV Air-to-Air Combat

Cited by: 31
Authors
Chai, Jiajun [1 ,2 ]
Chen, Wenzhang [1 ,2 ]
Zhu, Yuanheng [1 ,2 ]
Yao, Zong-Xin [3 ]
Zhao, Dongbin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence Syst, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Shenyang Aircraft Design & Res Inst, Dept Unmanned Aerial Vehicle, Shenyang 110035, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS | 2023, Vol. 53, No. 9
Funding
National Natural Science Foundation of China;
Keywords
Aircraft; Aerospace control; 6-DOF; Task analysis; Nose; Missiles; Heuristic algorithms; 6-DOF unmanned combat air vehicle (UCAV); air combat; hierarchical structure; reinforcement learning (RL); self-play; LEVEL; GAME;
DOI
10.1109/TSMC.2023.3270444
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Unmanned combat air vehicle (UCAV) combat is a challenging scenario with high-dimensional continuous state and action spaces and highly nonlinear dynamics. In this article, we propose a general hierarchical framework to solve the within-visual-range (WVR) air-to-air combat problem under six-degree-of-freedom (6-DOF) dynamics. The core idea is to divide the whole decision-making process into two loops and use reinforcement learning (RL) to solve them separately. The outer loop uses a combat policy to decide the macro command according to the current combat situation. The inner loop then uses a control policy to execute the macro command by computing the actual input signals for the aircraft. We design the Markov decision process for the control policy and the Markov game between the two aircraft, and present a two-stage training mechanism. For the control policy, we design an effective reward function to accurately track various macro behaviors. For the combat policy, we present a fictitious self-play mechanism that improves combat performance by playing against historical combat policies. Experimental results show that the control policy achieves better tracking performance than conventional methods, and that the fictitious self-play mechanism learns a competitive combat policy with high winning rates against conventional methods.
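To make the two-loop decision structure concrete, the following Python sketch illustrates how an outer combat policy and an inner control policy could interact at different time scales. It is a minimal illustration only, not the authors' implementation: the class names, the macro-command set, the stub environment, and the 10:1 time-scale ratio are all assumptions.

import numpy as np

# Hypothetical macro-command set; the paper's actual command set is not
# given in this record.
MACRO_COMMANDS = ["level_flight", "climb", "dive", "turn_left", "turn_right"]

class CombatPolicy:
    """Outer loop: maps the combat situation to a macro command
    (stand-in for the learned RL policy)."""
    def act(self, combat_state):
        return np.random.randint(len(MACRO_COMMANDS))

class ControlPolicy:
    """Inner loop: tracks a macro command by producing low-level control
    inputs, here [aileron, elevator, rudder, throttle] (assumed layout)."""
    def act(self, aircraft_state, macro_command):
        return np.zeros(4)

class DummyCombatEnv:
    """Stub environment so the sketch runs; not the paper's 6-DOF simulator."""
    def reset(self):
        return np.zeros(12), np.zeros(12)  # (combat_state, aircraft_state)

    def step(self, controls):
        done = np.random.rand() < 0.01  # random episode end, demo only
        return np.zeros(12), np.zeros(12), done

def run_episode(env, combat_policy, control_policy, inner_steps=10, max_steps=1000):
    """One macro decision per `inner_steps` low-level control steps (ratio assumed)."""
    combat_state, aircraft_state = env.reset()
    done, steps = False, 0
    while not done and steps < max_steps:
        command = combat_policy.act(combat_state)        # outer loop
        for _ in range(inner_steps):                     # inner loop
            controls = control_policy.act(aircraft_state, command)
            combat_state, aircraft_state, done = env.step(controls)
            steps += 1
            if done:
                break

run_episode(DummyCombatEnv(), CombatPolicy(), ControlPolicy())

Under the paper's fictitious self-play scheme, training the combat policy would additionally sample opponents from a pool of its own historical snapshots; that machinery is omitted here for brevity.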
Pages: 5417-5429
Page count: 13