Deep-Reinforcement-Learning-Based Collision Avoidance in UAV Environment

Cited by: 45
Authors
Ouahouah, Sihem [1]
Bagaa, Miloud [1]
Prados-Garzon, Jonathan [2]
Taleb, Tarik [1,3]
Affiliations
[1] Aalto Univ, Sch Elect Engn, Dept Commun & Networking, Espoo 00076, Finland
[2] Univ Granada, Dept Signal Theory Telemat & Commun, Granada 18014, Spain
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul 05006, South Korea
Keywords
Sensors; Unmanned aerial vehicles; Collision avoidance; Reinforcement learning; Vehicular ad hoc networks; Regulation; Industries; deep reinforcement learning; machine learning; multiaccess-edge computing (MEC); unmanned aerial vehicles (UAVs);
DOI
10.1109/JIOT.2021.3118949
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Unmanned aerial vehicles (UAVs) have recently attracted attention from both academia and industry due to their use in a tremendous number of emerging applications. Most UAV applications operate within visual line of sight (VLOS) because of current regulations. There is a consensus within industry on extending UAVs' commercial operations to urban and populated controlled airspace beyond VLOS (BVLOS), and regulatory work to enable BVLOS UAV management is ongoing. Unfortunately, this comes with unavoidable challenges related to UAV autonomy for detecting and avoiding static and mobile objects. An intelligent component should be deployed either onboard the UAV or at a multiaccess-edge computing (MEC) host, where it can read the data gathered from the UAV's different sensors, process them, and then make the right decision to detect and avoid physical collisions. The sensing data can be collected using various sensors, including but not limited to LiDAR, depth cameras, video cameras, and ultrasonic sensors. This article proposes probabilistic and deep-reinforcement-learning (DRL)-based algorithms for avoiding collisions while reducing energy consumption. The proposed algorithms can run either onboard the UAV or at the MEC, depending on the UAV's capacity and the task overhead. We have designed and developed our algorithms to work in any environment without requiring prior knowledge. The proposed solutions have been evaluated in a harsh environment consisting of many UAVs moving randomly in a small area without any correlation. The obtained results demonstrate the efficiency of these solutions in avoiding collisions while reducing energy consumption in both familiar and unfamiliar environments.
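To make the abstract's core idea more concrete, the following sketch shows one way a reinforcement-learning agent can trade off collision avoidance against energy use through its reward signal. It is only a minimal, self-contained illustration in plain Python/NumPy, assuming a toy single-obstacle world and a tabular Q-learning agent; the action set, energy costs, state encoding, and helper names (ACTIONS, ENERGY_COST, encode, step) are hypothetical and do not reproduce the paper's probabilistic or DRL algorithms.

import numpy as np

rng = np.random.default_rng(0)

# Five candidate maneuvers and their (illustrative) energy costs.
ACTIONS = {0: (0, 0),    # hover (cheapest)
           1: (1, 0),    # move right
           2: (-1, 0),   # move left
           3: (0, 1),    # move up
           4: (0, -1)}   # move down
ENERGY_COST = {0: 0.1, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0}

GRID = 9                                   # relative offsets clipped to [-4, 4]
N_STATES, N_ACTIONS = GRID * GRID, len(ACTIONS)
Q = np.zeros((N_STATES, N_ACTIONS))        # tabular action-value estimates

def encode(dx, dy):
    # Map the clipped obstacle offset (dx, dy) to a single state index.
    dx, dy = int(np.clip(dx, -4, 4)), int(np.clip(dy, -4, 4))
    return (dx + 4) * GRID + (dy + 4)

def step(dx, dy, action):
    # Apply the UAV maneuver, move the obstacle randomly, and compute the
    # reward: a large penalty on collision plus the maneuver's energy cost.
    ax, ay = ACTIONS[int(action)]
    ox, oy = rng.integers(-1, 2, size=2)   # random obstacle motion
    ndx, ndy = dx - ax + ox, dy - ay + oy  # new relative offset
    collided = (ndx == 0 and ndy == 0)
    reward = (-100.0 if collided else 0.0) - ENERGY_COST[int(action)]
    return ndx, ndy, reward, collided

alpha, gamma, eps = 0.1, 0.95, 0.1         # learning rate, discount, exploration
for episode in range(5000):
    dx, dy = rng.integers(-4, 5, size=2)   # random initial obstacle offset
    for _ in range(50):
        s = encode(dx, dy)
        a = rng.integers(N_ACTIONS) if rng.random() < eps else np.argmax(Q[s])
        dx, dy, r, done = step(dx, dy, a)
        Q[s, a] += alpha * (r + gamma * np.max(Q[encode(dx, dy)]) - Q[s, a])
        if done:
            break

# Inspect the greedy maneuver when the obstacle sits immediately to the right.
print("greedy action:", int(np.argmax(Q[encode(1, 0)])))

The same reward structure, a large collision penalty plus a per-maneuver energy term, carries over to a deep Q-network when the state is a continuous sensor reading rather than a discretized offset, which is closer to the onboard/MEC setting the abstract describes.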
Pages: 4015 - 4030
Number of pages: 16
Related Papers
50 records in total
  • [21] A learning method for AUV collision avoidance through deep reinforcement learning
    Xu, Jian
    Huang, Fei
    Wu, Di
    Cui, Yunfei
    Yan, Zheping
    Du, Xue
    OCEAN ENGINEERING, 2022, 260
  • [22] Taming an Autonomous Surface Vehicle for Path Following and Collision Avoidance Using Deep Reinforcement Learning
    Meyer, Eivind
    Robinson, Haakon
    Rasheed, Adil
    San, Omer
    IEEE ACCESS, 2020, 8 : 41466 - 41481
  • [23] Collision avoidance for a small drone with a monocular camera using deep reinforcement learning in an indoor environment
    Kim, M.
    Kim, J.
    Jung, M.
    Oh, H.
    Journal of Institute of Control, Robotics and Systems, 2020, 26 (06) : 399 - 411
  • [24] Deep Reinforcement Learning for Ship Collision Avoidance and Path Tracking
    Singh, Amar Nath
    Vijayakumar, Akash
    Balasubramaniyam, Shankruth
    Somayajula, Abhilash
    PROCEEDINGS OF ASME 2024 43RD INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, OMAE2024, VOL 5B, 2024
  • [25] Ship Collision Avoidance Using Constrained Deep Reinforcement Learning
    Zhang, Rui
    Wang, Xiao
    Liu, Kezhong
    Wu, Xiaolie
    Lu, Tianyou
    Chao, Zhaohui
    2018 5TH INTERNATIONAL CONFERENCE ON BEHAVIORAL, ECONOMIC, AND SOCIO-CULTURAL COMPUTING (BESC), 2018, : 115 - 120
  • [26] Smooth Trajectory Collision Avoidance through Deep Reinforcement Learning
    Song, Sirui
    Saunders, Kirk
    Yue, Ye
    Liu, Jundong
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 914 - 919
  • [27] A UAV Indoor Obstacle Avoidance System Based on Deep Reinforcement Learning
    Lo, Chun-Huang
    Lee, Chung-Nan
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 2137 - 2143
  • [28] UAV intelligent avoidance decisions based on deep reinforcement learning algorithm
    Wu, F.
    Tao, W.
    Li, H.
    Zhang, J.
    Zheng, C.
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2023, 45 (06): 1702 - 1711
  • [29] Integrating human experience in deep reinforcement learning for multi-UAV collision detection and avoidance
    Wang, Guanzheng
    Xu, Yinbo
    Liu, Zhihong
    Xu, Xin
    Wang, Xiangke
    Yan, Jiarun
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2022, 49 (02): 256 - 270
  • [30] Deep reinforcement learning based collision avoidance system for autonomous ships
    Wang, Yong
    Xu, Haixiang
    Feng, Hui
    He, Jianhua
    Yang, Haojie
    Li, Fen
    Yang, Zhen
    OCEAN ENGINEERING, 2024, 292