Deep-Reinforcement-Learning-Based Collision Avoidance in UAV Environment

Cited by: 45
|
Authors
Ouahouah, Sihem [1 ]
Bagaa, Miloud [1 ]
Prados-Garzon, Jonathan [2 ]
Taleb, Tarik [1 ,3 ]
Affiliations
[1] Aalto Univ, Sch Elect Engn, Dept Commun & Networking, Espoo 00076, Finland
[2] Univ Granada, Dept Signal Theory Telemat & Commun, Granada 18014, Spain
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul 05006, South Korea
Keywords
Sensors; Unmanned aerial vehicles; Collision avoidance; Reinforcement learning; Vehicular ad hoc networks; Regulation; Industries; deep reinforcement learning; machine learning; multiaccess-edge computing (MEC); unmanned aerial vehicles (UAVs);
DOI
10.1109/JIOT.2021.3118949
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Unmanned aerial vehicles (UAVs) have recently attracted attention from both academia and industry due to their use in a tremendous number of emerging applications. Most UAV applications operate within visual line of sight (VLOS) because of current regulations. There is a consensus among industry stakeholders on extending UAVs' commercial operations to cover urban and populated controlled airspace beyond VLOS (BVLOS), and regulations enabling BVLOS UAV management are under development. Regrettably, this comes with unavoidable challenges related to UAV autonomy in detecting and avoiding static and mobile objects. An intelligent component should be deployed either onboard the UAV or at a multiaccess-edge computing (MEC) host, where it can read the data gathered from the UAV's different sensors, process them, and then make the right decisions to detect and avoid physical collisions. The sensing data can be collected using various sensors, including but not limited to LiDAR, depth cameras, video cameras, and ultrasonic sensors. This article proposes probabilistic and deep-reinforcement-learning (DRL)-based algorithms for avoiding collisions while reducing energy consumption. The proposed algorithms can run either onboard the UAV or at the MEC, according to the UAV's capacity and the task overhead. We have designed and developed our algorithms to work in any environment without requiring prior knowledge. The proposed solutions have been evaluated in a harsh environment consisting of many UAVs moving randomly in a small area without any correlation. The obtained results demonstrate the efficiency of these solutions in avoiding collisions while reducing energy consumption in both familiar and unfamiliar environments.
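
The record above gives only an abstract, with no implementation details. Purely as an illustration of the class of approach it describes (a DRL agent that selects maneuvers from sensor-derived state, with energy consumption folded into the reward), the following is a minimal, generic DQN-style sketch in Python/PyTorch. It is not the authors' algorithm: the state layout, the discrete action set, the network sizes, the reward weights, and all names (QNetwork, DQNCollisionAvoider, energy_aware_reward) are hypothetical assumptions.

```python
# Hypothetical sketch only -- NOT the paper's implementation.
# A generic DQN-style agent with an energy-penalized reward, assuming a
# fixed-size sensor-derived state vector and a small discrete maneuver set.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a sensor-derived state vector to Q-values over discrete maneuvers."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DQNCollisionAvoider:
    """Assumed action set, e.g. {hold, climb, descend, left, right, forward}."""

    def __init__(self, state_dim=8, n_actions=6, gamma=0.99, lr=1e-3):
        self.q = QNetwork(state_dim, n_actions)
        self.target_q = QNetwork(state_dim, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma = gamma
        self.n_actions = n_actions
        self.buffer = deque(maxlen=50_000)   # experience replay

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy selection over the discrete maneuvers.
        if random.random() < epsilon:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax().item())

    def remember(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, done))

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_q.load_state_dict(self.q.state_dict())

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2, d = map(np.array, zip(*random.sample(self.buffer, batch_size)))
        s = torch.as_tensor(s, dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
        r = torch.as_tensor(r, dtype=torch.float32)
        s2 = torch.as_tensor(s2, dtype=torch.float32)
        d = torch.as_tensor(d, dtype=torch.float32)
        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1.0 - d) * self.target_q(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


def energy_aware_reward(collided: bool, dist_to_nearest: float,
                        energy_used: float, lambda_energy: float = 0.1) -> float:
    """Illustrative reward: heavily penalize collisions, mildly reward clearance
    from the nearest obstacle/UAV, and penalize the energy spent on the maneuver."""
    if collided:
        return -100.0
    return min(dist_to_nearest, 5.0) - lambda_energy * energy_used
```

In principle the same loop could execute either onboard the UAV or at a MEC host, with only the sensing and actuation interfaces changing; the hyperparameters here are placeholders, not values reported in the article.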
Pages: 4015 - 4030
Number of pages: 16
Related Papers
50 records in total
  • [31] Autonomous Obstacle Avoidance and Target Tracking of UAV Based on Deep Reinforcement Learning
    Xu, Guoqiang
    Jiang, Weilai
    Wang, Zhaolei
    Wang, Yaonan
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 104 (04)
  • [33] Inter-UAV Collision Avoidance using Deep-Q-Learning in Flocking Environment
    Raja, Gunasekaran
    Anbalagan, Sudha
    Narayanan, Vikraman Sathiya
    Jayaram, Srinivas
    Ganapathisubramaniyan, Aishwarya
    2019 IEEE 10TH ANNUAL UBIQUITOUS COMPUTING, ELECTRONICS & MOBILE COMMUNICATION CONFERENCE (UEMCON), 2019: 1089 - 1095
  • [34] Deep-Reinforcement-Learning-Based Placement for Integrated Access Backhauling in UAV-Assisted Wireless Networks
    Wang, Yuhui
    Farooq, Junaid
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (08): 14727 - 14738
  • [35] A Deep Reinforcement Learning Method for Collision Avoidance with Dense Speed-Constrained Multi-UAV
    Han, Jiale
    Zhu, Yi
    Yang, Jian
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (03): 2152 - 2159
  • [36] Adaptive Environment Modeling Based Reinforcement Learning for Collision Avoidance in Complex Scenes
    Wang, Shuaijun
    Gao, Rui
    Han, Ruihua
    Chen, Shengduo
    Li, Chengyang
    Hao, Qi
    2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022: 9011 - 9018
  • [37] COLREGs-compliant multiship collision avoidance based on deep reinforcement learning
    Zhao, Luman
    Roh, Myung-Il
    OCEAN ENGINEERING, 2019, 191
  • [38] Research on MASS Collision Avoidance in Complex Waters Based on Deep Reinforcement Learning
    Liu, Jiao
    Shi, Guoyou
    Zhu, Kaige
    Shi, Jiahui
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (04)
  • [39] Connecting Deep-Reinforcement-Learning-based Obstacle Avoidance with Conventional Global Planners using Waypoint Generators
    Kaestner, Linh
    Zhao, Xinlin
    Buiyan, Teham
    Li, Junhui
    Shen, Zhengcheng
    Lambrecht, Jens
    Marx, Cornelius
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021: 1213 - 1220
  • [40] Research on Method of Collision Avoidance Planning for UUV Based on Deep Reinforcement Learning
    Gao, Wei
    Han, Mengxue
    Wang, Zhao
    Deng, Lihui
    Wang, Hongjian
    Ren, Jingfei
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (12)