Deep-Reinforcement-Learning-Based Collision Avoidance in UAV Environment

Cited by: 45
Authors
Ouahouah, Sihem [1 ]
Bagaa, Miloud [1 ]
Prados-Garzon, Jonathan [2 ]
Taleb, Tarik [1 ,3 ]
Affiliations
[1] Aalto Univ, Sch Elect Engn, Dept Commun & Networking, Espoo 00076, Finland
[2] Univ Granada, Dept Signal Theory Telemat & Commun, Granada 18014, Spain
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul 05006, South Korea
Keywords
Sensors; Unmanned aerial vehicles; Collision avoidance; Reinforcement learning; Vehicular ad hoc networks; Regulation; Industries; deep reinforcement learning; machine learning; multiaccess-edge computing (MEC); unmanned aerial vehicles (UAVs);
DOI
10.1109/JIOT.2021.3118949
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Unmanned aerial vehicles (UAVs) have recently attracted attention from both academia and industry due to their use in a tremendous number of emerging applications. Most UAV applications operate within visual line of sight (VLOS) because of current regulations. There is a consensus within industry on extending UAVs' commercial operations to cover controlled airspace over urban and populated areas beyond VLOS (BVLOS), and regulation enabling BVLOS UAV management is ongoing. Regrettably, this comes with unavoidable challenges related to UAVs' autonomy in detecting and avoiding static and mobile objects. An intelligent component should be deployed either onboard the UAV or at a multiaccess-edge computing (MEC) host that can read the data gathered from the UAV's different sensors, process them, and then make the right decision to detect and avoid physical collisions. The sensing data can be collected using various sensors, including but not limited to Lidar, depth cameras, video, and ultrasonic sensors. This article proposes probabilistic and deep-reinforcement-learning (DRL)-based algorithms for avoiding collisions while reducing energy consumption. The proposed algorithms can run either on the UAV or at the MEC, according to the UAV's capacity and the task overhead. We have designed and developed our algorithms to work in any environment without requiring prior knowledge. The proposed solutions have been evaluated in a harsh environment consisting of many UAVs moving randomly in a small area without any correlation. The obtained results demonstrate the efficiency of these solutions in avoiding collisions while reducing energy consumption in both familiar and unfamiliar environments.
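The abstract describes an RL loop whose reward combines a collision penalty with an energy cost. The paper's actual method is deep RL over multi-sensor observations; the sketch below is only a minimal tabular stand-in on a toy grid to illustrate that reward shaping. The grid size, obstacle positions, reward values, and hyperparameters are all illustrative assumptions, not taken from the article.

```python
import random

# Toy stand-in: a UAV navigates a 5x5 grid from (0, 0) to (4, 4) while
# avoiding static obstacle cells. A large negative reward models a
# collision; a small per-step cost models energy consumption.
SIZE = 5
OBSTACLES = {(1, 2), (2, 2), (3, 1)}          # hypothetical static objects
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # E, W, S, N

def step(state, action):
    """Apply a move; return (next_state, reward, done)."""
    nx = min(max(state[0] + action[0], 0), SIZE - 1)
    ny = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (nx, ny)
    if nxt in OBSTACLES:
        return state, -100.0, True   # collision: large penalty, episode ends
    if nxt == GOAL:
        return nxt, 10.0, True       # reached the goal
    return nxt, -1.0, False          # per-step cost models energy use

def train(episodes=5000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning over the toy grid."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> estimated value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            target = r + (0.0 if done else gamma * best_next)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s = s2
            if done:
                break
    return q

def greedy_path(q, max_steps=50):
    """Roll out the learned greedy policy; return the visited states."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

In the article's setting the Q-table is replaced by a deep network fed with sensor data, and the same loop can execute onboard or at the MEC host; the reward structure, not the function approximator, is what this sketch illustrates.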
Pages: 4015-4030
Page count: 16
Related Papers
50 records in total
  • [41] Research on Method of Collision Avoidance Planning for UUV Based on Deep Reinforcement Learning
    Gao, Wei
    Han, Mengxue
    Wang, Zhao
    Deng, Lihui
    Wang, Hongjian
    Ren, Jingfei
    JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2023, 11 (12)
  • [42] Deep-Reinforcement-Learning-Based Computation Offloading in UAV-Assisted Vehicular Edge Computing Networks
    Yan, Junjie
    Zhao, Xiaohui
    Li, Zan
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (11): 19882 - 19897
  • [43] A learning method for AUV collision avoidance through deep reinforcement learning
    Xu, Jian
    Huang, Fei
    Wu, Di
    Cui, Yunfei
    Yan, Zheping
    Du, Xue
    OCEAN ENGINEERING, 2022, 260
  • [44] Reinforcement Learning-Based Collision Avoidance and Optimal Trajectory Planning in UAV Communication Networks
    Hsu, Yu-Hsin
    Gau, Rung-Hung
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2022, 21 (01) : 306 - 320
  • [45] Collision avoidance for a small drone with a monocular camera using deep reinforcement learning in an indoor environment
    Kim M.
    Kim J.
    Jung M.
    Oh H.
    Journal of Institute of Control, Robotics and Systems, 2020, 26 (06) : 399 - 411
  • [46] Autonomous Vision-Based UAV Landing with Collision Avoidance Using Deep Learning
    Liao, Tianpei
    Haridevan, Amal
    Liu, Yibo
    Shan, Jinjun
    INTELLIGENT COMPUTING, VOL 2, 2022, 507 : 79 - 87
  • [47] Smooth Trajectory Collision Avoidance through Deep Reinforcement Learning
    Song, Sirui
    Saunders, Kirk
    Yue, Ye
    Liu, Jundong
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 914 - 919
  • [48] Deep-Reinforcement-Learning-Based Intelligent Routing Strategy for FANETs
    Lin, Deping
    Peng, Tao
    Zuo, Peiliang
    Wang, Wenbo
    SYMMETRY-BASEL, 2022, 14 (09)
  • [49] Formation Control with Collision Avoidance through Deep Reinforcement Learning
    Sui, Zezhi
    Pu, Zhiqiang
    Yi, Jianqiang
    Xiong, Tianyi
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [50] DEEP REINFORCEMENT LEARNING FOR SHIP COLLISION AVOIDANCE AND PATH TRACKING
    Singht, Amar Nath
    Vijayakumar, Akash
    Balasubramaniyam, Shankruth
    Somayajula, Abhilash
    PROCEEDINGS OF ASME 2024 43RD INTERNATIONAL CONFERENCE ON OCEAN, OFFSHORE AND ARCTIC ENGINEERING, OMAE2024, VOL 5B, 2024,