Air Channel Planning Based on Improved Deep Q-Learning and Artificial Potential Fields

Cited by: 5
Authors
Li, Jie [1 ]
Shen, Di [1 ]
Yu, Fuping [1 ]
Zhang, Renmeng [1 ]
Affiliations
[1] AF Engn Univ, Air Traff Control & Nav Coll, Xian 710051, Peoples R China
Keywords
air channel network; path planning; urban air traffic control; DQN;
DOI
10.3390/aerospace10090758
Chinese Library Classification (CLC)
V [Aeronautics, Astronautics];
Subject Classification Code
08; 0825;
Abstract
With the rapid advancement of unmanned aerial vehicle (UAV) technology, the widespread use of UAVs poses significant challenges to urban low-altitude safety and airspace management. In the near future, the number of drones is expected to surge substantially, so effectively regulating UAV flight behavior has become an urgent issue. This paper therefore proposes a standardized approach to UAV flight through the design of an air channel network. The network is composed of many single air channels, and this study focuses on the characteristics of a single channel. To obtain better channels, the idea of the artificial potential field algorithm is integrated into the deep Q-learning algorithm when a single air channel is established. By improving the action space and the reward mechanism, the resulting channel efficiently avoids buildings and other obstacles. Finally, the algorithm is evaluated through comprehensive simulation experiments, which demonstrate that it meets the above requirements.
Pages: 13
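
The abstract describes folding an artificial-potential-field (APF) term into the deep Q-learning reward so that the learned channel steers away from buildings. The paper's exact formulation is not reproduced in this record; the following is a minimal Python sketch of that general idea only, assuming a 3-D grid environment and hypothetical gains K_ATT and K_REP and influence radius D0, with the APF value used as a potential-difference shaping term on top of sparse goal/collision rewards.

    import numpy as np

    # Hypothetical APF parameters (illustrative, not taken from the paper):
    # attraction gain, repulsion gain, and obstacle influence radius.
    K_ATT, K_REP, D0 = 1.0, 100.0, 3.0

    def potential(pos, goal, obstacles):
        """Classic APF value: quadratic attraction to the goal plus repulsion near obstacles."""
        pos, goal = np.asarray(pos, float), np.asarray(goal, float)
        u_att = 0.5 * K_ATT * np.linalg.norm(pos - goal) ** 2
        u_rep = 0.0
        for obs in obstacles:
            d = np.linalg.norm(pos - np.asarray(obs, float))
            if 1e-6 < d < D0:
                u_rep += 0.5 * K_REP * (1.0 / d - 1.0 / D0) ** 2
        return u_att + u_rep

    def shaped_reward(pos, next_pos, goal, obstacles, step_cost=-1.0):
        """Sparse goal/collision rewards plus an APF potential-difference shaping bonus."""
        if tuple(next_pos) == tuple(goal):
            return 100.0                  # reached the channel end point
        if any(tuple(next_pos) == tuple(o) for o in obstacles):
            return -100.0                 # collided with a building/obstacle
        # Moving "downhill" in the potential field yields a positive bonus.
        return step_cost + potential(pos, goal, obstacles) - potential(next_pos, goal, obstacles)

    # Example: a step that moves toward the goal while staying clear of obstacles.
    goal = (9, 9, 5)
    obstacles = [(4, 4, 5), (5, 4, 5)]
    print(shaped_reward((3, 4, 5), (3, 5, 5), goal, obstacles))

Shaping with the potential difference, rather than the raw potential, is the standard potential-based shaping choice that leaves the optimal policy unchanged; it is one plausible reading of the improved reward mechanism mentioned in the abstract, not the authors' published formulation.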