Learning to Navigate in Turbulent Flows With Aerial Robot Swarms: A Cooperative Deep Reinforcement Learning Approach

Cited by: 1
Authors
Patino, Diego [1 ]
Mayya, Siddharth [2 ]
Calderon, Juan [3 ,4 ]
Daniilidis, Kostas [1 ]
Saldana, David [5 ]
Affiliations
[1] Univ Penn, GRASP Lab, Philadelphia, PA 19104 USA
[2] Amazon Robot, Cambridge, MA 02141 USA
[3] Univ St Tomas, Bogota 110231, Colombia
[4] Bethune Cookman Univ, Daytona Beach, FL 32114 USA
[5] Lehigh Univ, Autonomous & Intelligent Robot Lab AIRLab, Bethlehem, PA 18015 USA
Keywords
Robots; Robot kinematics; Robot sensing systems; Wind; Navigation; Force; Drag; Swarm robotics; reinforcement learning; wind turbulence; machine learning for robot control; graph neural networks; NEURAL-NETWORKS; FIELDS
DOI
10.1109/LRA.2023.3280806
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
Aerial operation in turbulent environments is a challenging problem due to the chaotic behavior of the flow. This problem becomes even more complex when a team of aerial robots must achieve coordinated motion in turbulent wind conditions. In this letter, we present a novel multi-robot controller for navigating in turbulent flows, decoupling trajectory-tracking control from turbulence compensation via a nested control architecture. Unlike previous works, our method does not learn to compensate for the airflow at a specific time and location. Instead, it learns to compensate for the flow based on its effect on the team. This is made possible by a deep reinforcement learning approach, implemented with a Graph Convolutional Neural Network (GCNN)-based architecture, which enables the robots to achieve better wind compensation by processing the spatio-temporal correlation of wind flows across the team. Our approach scales well to large robot teams, since each robot only uses information from its nearest neighbors, and generalizes to teams larger than those seen in training. Simulated experiments demonstrate how information sharing improves turbulence compensation in a team of aerial robots and show the flexibility of our method across different team configurations.
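The abstract describes a GCNN architecture in which each robot aggregates information only from its nearest neighbors. The following is a minimal, illustrative sketch of that neighbor-aggregation idea, not the authors' implementation: the function name, feature dimensions, and ring topology are all assumptions made for the example.

```python
import numpy as np

def gcn_layer(features, adjacency, weights):
    """One graph-convolution layer: each robot mixes its own
    features with those of its neighbors (mean aggregation),
    then applies a shared linear map and a ReLU."""
    # Add self-loops and row-normalize so each robot averages
    # over itself and its neighbors.
    a_hat = adjacency + np.eye(adjacency.shape[0])
    a_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_norm @ features @ weights, 0.0)

# Hypothetical setup: 4 robots, 3-dimensional local
# wind-effect features, ring communication topology.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
w = rng.normal(size=(3, 3))
out = gcn_layer(x, adj, w)
print(out.shape)  # one embedding per robot: (4, 3)
```

Because the weights are shared across robots and aggregation is purely local, the same layer applies unchanged to a team of any size, which is consistent with the scaling and generalization properties claimed in the abstract.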
Pages: 4219-4226
Page count: 8
Related Papers
50 records
  • [1] Learning to Navigate for Mobile Robot with Continual Reinforcement Learning
    Wang, Ning
    Zhang, Dingyuan
    Wang, Yong
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 3701 - 3706
  • [2] Reinforcement learning-based aggregation for robot swarms
    Amjadi, Arash Sadeghi
    Bilaloglu, Cem
    Turgut, Ali Emre
    Na, Seongin
    Sahin, Erol
    Krajnik, Tomas
    Arvin, Farshad
    ADAPTIVE BEHAVIOR, 2024, 32 (03) : 265 - 281
  • [3] Cooperative Deep Reinforcement Learning Policies for Autonomous Navigation in Complex Environments
    Tran, Van Manh
    Kim, Gon-Woo
    IEEE ACCESS, 2024, 12 : 101053 - 101065
  • [4] Learn to Navigate Autonomously Through Deep Reinforcement Learning
    Wu, Keyu
    Wang, Han
    Esfahani, Mahdi Abolfazli
    Yuan, Shenghai
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2022, 69 (05) : 5342 - 5352
  • [5] Visuomotor Reinforcement Learning for Multirobot Cooperative Navigation
    Liu, Zhe
    Liu, Qiming
    Tang, Ling
    Jin, Kefan
    Wang, Hongye
    Liu, Ming
    Wang, Hesheng
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2022, 19 (04) : 3234 - 3245
  • [6] Learning to Navigate in Human Environments via Deep Reinforcement Learning
    Gao, Xingyuan
    Sun, Shiying
    Zhao, Xiaoguang
    Tan, Min
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I, 2019, 11953 : 418 - 429
  • [7] Learning to Navigate Through Complex Dynamic Environment With Modular Deep Reinforcement Learning
    Wang, Yuanda
    He, Haibo
    Sun, Changyin
    IEEE TRANSACTIONS ON GAMES, 2018, 10 (04) : 400 - 412
  • [8] Deep learning and reinforcement learning approach on microgrid
    Chandrasekaran, Kumar
    Kandasamy, Prabaakaran
    Ramanathan, Srividhya
    INTERNATIONAL TRANSACTIONS ON ELECTRICAL ENERGY SYSTEMS, 2020, 30 (10):
  • [9] Table-Balancing Cooperative Robot Based on Deep Reinforcement Learning
    Kim, Yewon
    Kim, Dae-Won
    Kang, Bo-Yeong
    SENSORS, 2023, 23 (11)
  • [10] Cooperative Spectrum Sensing Meets Machine Learning: Deep Reinforcement Learning Approach
    Sarikhani, Rahil
    Keynia, Farshid
    IEEE COMMUNICATIONS LETTERS, 2020, 24 (07) : 1459 - 1462