A Hierarchical Deep Reinforcement Learning Framework for 6-DOF UCAV Air-to-Air Combat

Cited by: 33
Authors
Chai, Jiajun [1 ,2 ]
Chen, Wenzhang [1 ,2 ]
Zhu, Yuanheng [1 ,2 ]
Yao, Zong-Xin [3 ]
Zhao, Dongbin [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] Shenyang Aircraft Design & Res Inst, Dept Unmanned Aerial Vehicle, Shenyang 110035, Peoples R China
Source
IEEE Transactions on Systems, Man, and Cybernetics: Systems | 2023, Vol. 53, No. 9
Funding
National Natural Science Foundation of China
Keywords
Aircraft; Aerospace control; 6-DOF; Task analysis; Nose; Missiles; Heuristic algorithms; 6-DOF unmanned combat air vehicle (UCAV); air combat; hierarchical structure; reinforcement learning (RL); self-play; LEVEL; GAME;
DOI
10.1109/TSMC.2023.3270444
CLC Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Unmanned combat air vehicle (UCAV) combat is a challenging scenario with a high-dimensional continuous state and action space and highly nonlinear dynamics. In this article, we propose a general hierarchical framework to solve the within-visual-range (WVR) air-to-air combat problem under six-degree-of-freedom (6-DOF) dynamics. The core idea is to divide the whole decision-making process into two loops and use reinforcement learning (RL) to solve them separately. The outer loop uses a combat policy to decide a macro command according to the current combat situation. The inner loop then uses a control policy to execute the macro command by calculating the actual input signals for the aircraft. We design the Markov decision process for the control policy and the Markov game between the two aircraft, and present a two-stage training mechanism. For the control policy, we design an effective reward function to accurately track various macro behaviors. For the combat policy, we present a fictitious self-play mechanism that improves combat performance by playing against historical combat policies. Experimental results show that the control policy achieves better tracking performance than conventional methods, and that the fictitious self-play mechanism learns a competitive combat policy that achieves high winning rates against conventional opponents.
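The two-loop structure described in the abstract can be sketched in minimal form as follows. This is an illustrative assumption, not the authors' implementation: the class names, macro-command set, update period, and the rule-based stand-ins for the two learned RL policies are all hypothetical.

```python
# Hypothetical sketch of the hierarchical two-loop framework:
# an outer combat policy picks a macro command at a low rate, and an
# inner control policy converts it into 6-DOF actuator inputs every step.

class CombatPolicy:
    """Outer loop: maps the combat situation to a macro command."""

    def decide(self, situation):
        # Stand-in rule base; the paper learns this mapping with RL.
        if situation["relative_altitude"] < -100.0:
            return "climb"
        if situation["bearing_to_target"] > 30.0:
            return "turn_right"
        if situation["bearing_to_target"] < -30.0:
            return "turn_left"
        return "track"


class ControlPolicy:
    """Inner loop: turns a macro command into actuator inputs."""

    def act(self, command):
        # Stand-in fixed deflections; the paper instead trains an RL
        # policy with a tracking reward to realize each macro behavior.
        elevator = {"climb": 0.1, "dive": -0.1}.get(command, 0.0)
        aileron = {"turn_left": -0.2, "turn_right": 0.2}.get(command, 0.0)
        return {"elevator": elevator, "aileron": aileron, "throttle": 0.8}


def step_hierarchy(situation, state, combat, control, t, outer_period=10):
    """The outer policy fires once every `outer_period` inner steps;
    the inner policy runs at every simulation step."""
    if t % outer_period == 0:
        state["macro"] = combat.decide(situation)
    return control.act(state["macro"])
```

The key design point the sketch captures is the timescale separation: the macro command is held fixed between outer-loop decisions, so the control policy only ever has to track one behavior at a time.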
Pages: 5417-5429 (13 pages)
Related Papers (50 entries in total)
[21] Wang, Huan; Wang, Jintao. Enhancing multi-UAV air combat decision making via hierarchical reinforcement learning. Scientific Reports, 2024, 14(1).
[22] Zheng, Ye; Chen, Zhang; Lv, Dailin; Li, Zhixing; Lan, Zhenzhong; Zhao, Shiyu. Air-to-Air Visual Detection of Micro-UAVs: An Experimental Evaluation of Deep Learning. IEEE Robotics and Automation Letters, 2021, 6(2): 1020-1027.
[23] Yan, Zihui; Liang, Xiaolong; Hou, Yueqi; Yang, Aiwu; Zhang, Jiaqiang; Wang, Ning. A sample selection mechanism for multi-UCAV air combat policy training using multi-agent reinforcement learning. Chinese Journal of Aeronautics, 2025, 38(6).
[24] Piao, Haiyin; Sun, Zhixiao; Meng, Guanglei; Chen, Hechang; Qu, Bohao; Lang, Kuijun; Sun, Yang; Yang, Shengqi; Peng, Xuanqi. Beyond-Visual-Range Air Combat Tactics Auto-Generation by Reinforcement Learning. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
[25] Ma, Xiaoteng; Xia, Li; Zhao, Qianchuan. Air-Combat Strategy Using Deep Q-Learning. 2018 Chinese Automation Congress (CAC), 2018: 3952-3957.
[26] Qian, Chenxu; Zhang, Xuebo; Li, Lun; Zhao, Minghui; Fang, Yongchun. H3E: Learning air combat with a three-level hierarchical framework embedding expert knowledge. Expert Systems with Applications, 2024, 245.
[27] Lee, Gyeong Taek; Kim, Chang Ouk. Autonomous Control of Combat Unmanned Aerial Vehicles to Evade Surface-to-Air Missiles Using Deep Reinforcement Learning. IEEE Access, 2020, 8: 226724-226736.
[28] Kong, Wei-ren; Zhou, De-yun; Du, Yong-jie; Zhou, Ying; Zhao, Yi-yang. Hierarchical multi-agent reinforcement learning for multi-aircraft close-range air combat. IET Control Theory and Applications, 2023, 17(13): 1840-1862.
[29] Yang, Jian; Wang, Liangpei; Han, Jiale; Chen, Changdi; Yuan, Yinlong; Yu, Zhu Liang; Yang, Guoli. An air combat maneuver decision-making approach using coupled reward in deep reinforcement learning. Complex & Intelligent Systems, 2025, 11(8).
[30] Kuroswiski, Andre R.; Wu, Annie S.; Passaro, Angelo. Enhancing BVR Air Combat Agent Development With Attention-Driven Reinforcement Learning. IEEE Access, 2025, 13: 70446-70463.