Reinforcement Learning Based Trajectory Planning for Multi-UAV Load Transportation

Cited by: 0
Authors
Estevez, Julian [1 ]
Lopez-Guede, Jose Manuel [2]
del Valle-Echavarri, Javier [2 ]
Grana, Manuel [3 ]
Affiliations
[1] Univ Basque Country UPV EHU, Fac Engn Gipuzkoa, Grp Computat Intelligence, Donostia San Sebastian 20018, Spain
[2] Univ Basque Country, Fac Engn Vitoria, Grp Computat Intelligence, Vitoria 01006, Spain
[3] Univ Basque Country, Fac Comp Sci, Grp Computat Intelligence, Donostia San Sebastian 20018, Spain
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Aerial robots; payload; reinforcement learning; UAVs; QUADROTOR;
DOI
10.1109/ACCESS.2024.3470509
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
This study introduces a novel trajectory planning approach, based on a reinforcement learning (RL) algorithm, for transporting cable-suspended loads with three quadrotors. The primary objective of this path planning method is to transport the cargo smoothly while minimizing load swing. In the proposed solution, the RL value function is estimated through a feature vector and a parameter vector tailored to the specific problem. The parameter vector is updated iteratively via a batch method, and the updated parameters then guide the generation of the desired trajectory through a greedy strategy. Finally, the desired trajectory is sent to the quadrotor controller to ensure precise trajectory tracking. Simulation results demonstrate that the trained parameters effectively fit the value function.
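The approach the abstract describes (a linear value function built from a feature vector and a parameter vector, fitted in batches, with a greedy strategy picking the next waypoint) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual design: the feature choices, the reward, the learning rate, and the two-component state (position error, swing angle) are all assumptions made for the example.

```python
import numpy as np

def phi(state):
    """Illustrative feature vector over (position error, swing angle)."""
    pos_err, swing = state
    return np.array([1.0, pos_err, swing, pos_err**2, swing**2])

def batch_update(w, transitions, gamma=0.95, lr=0.05):
    """One batch (semi-gradient TD) update of the parameter vector w.

    Each transition is (state, reward, next_state); the TD errors of the
    whole batch are averaged before w is moved, mirroring a batch method.
    """
    grad = np.zeros_like(w)
    for s, r, s_next in transitions:
        td = r + gamma * w @ phi(s_next) - w @ phi(s)
        grad += td * phi(s)
    return w + lr * grad / len(transitions)

def greedy_next(w, candidates):
    """Greedy strategy: pick the candidate state with the highest value."""
    return max(candidates, key=lambda s: w @ phi(s))

# Toy usage with a hand-made batch of transitions (negative reward
# penalizes remaining error/swing).
w = np.zeros(5)
transitions = [((1.0, 0.3), -1.0, (0.8, 0.2)),
               ((0.8, 0.2), -0.5, (0.5, 0.1)),
               ((0.5, 0.1), -0.1, (0.1, 0.0))]
for _ in range(50):
    w = batch_update(w, transitions)
next_wp = greedy_next(w, [(0.8, 0.2), (0.5, 0.1)])
```

In this sketch the fitted parameters make low-error, low-swing states more valuable, so the greedy step steers the trajectory toward waypoints with less load swing, which is the behavior the abstract attributes to the trained value function.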
Pages: 144009 - 144016
Page count: 8
Related Papers
50 records in total
  • [11] Multi-UAV Adaptive Cooperative Formation Trajectory Planning Based on an Improved MATD3 Algorithm of Deep Reinforcement Learning
    Xing, Xiaojun
    Zhou, Zhiwei
    Li, Yan
    Xiao, Bing
    Xun, Yilin
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (09) : 12484 - 12499
  • [12] Multi-UAV Cooperation and Control for Load Transportation and Deployment
    I. Maza
    K. Kondak
    M. Bernard
    A. Ollero
    Journal of Intelligent and Robotic Systems, 2010, 57 : 417 - 449
  • [13] Multi-UAV Cooperation and Control for Load Transportation and Deployment
    Maza, I.
    Kondak, K.
    Bernard, M.
    Ollero, A.
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2010, 57 (1-4) : 417 - 449
  • [14] On Collaborative Multi-UAV Trajectory Planning for Data Collection
    Rahim, Shahnila
    Peng, Limei
    Chang, Shihyu
    Ho, Pin-Han
    JOURNAL OF COMMUNICATIONS AND NETWORKS, 2023, 25 (06) : 722 - 733
  • [15] Multi-UAV Path Planning for Wireless Data Harvesting With Deep Reinforcement Learning
    Bayerlein, Harald
    Theile, Mirco
    Caccamo, Marco
    Gesbert, David
    IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2021, 2 : 1171 - 1187
  • [16] MULTI-UAV COOPERATIVE TRANSPORTATION USING DYNAMIC CONTROL ALLOCATION AND A REINFORCEMENT LEARNING COMPENSATOR
    Li, Shuai
    Zanotto, Damiano
    PROCEEDINGS OF ASME 2021 INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, IDETC-CIE2021, VOL 9, 2021
  • [17] Trajectory Design and Resource Allocation for Multi-UAV Networks: Deep Reinforcement Learning Approaches
    Chang, Zheng
    Deng, Hengwei
    You, Li
    Min, Geyong
    Garg, Sahil
    Kaddoum, Georges
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (05): 2940 - 2951
  • [18] Deep Reinforcement Learning Approach for Joint Trajectory Design in Multi-UAV IoT Networks
    Xu, Shu
    Zhan, Xiangyu
    Li, Chunguo
    Wang, Dongming
    Yang, Luxi
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (03) : 3389 - 3394
  • [19] Joint Optimization of Multi-UAV Target Assignment and Path Planning Based on Multi-Agent Reinforcement Learning
    Qie, Han
    Shi, Dianxi
    Shen, Tianlong
    Xu, Xinhai
    Li, Yuan
    Wang, Liujing
    IEEE ACCESS, 2019, 7 : 146264 - 146272
  • [20] A Method of Multi-UAV Cooperative Task Assignment Based on Reinforcement Learning
    Zhao, Xiaohu
    Jiang, Hanli
    An, Chenyang
    Wu, Ruocheng
    Guo, Yijun
    Yang, Daquan
    MOBILE INFORMATION SYSTEMS, 2022, 2022