Deep Reinforcement Learning-Based Large-Scale Robot Exploration

Cited by: 1
Authors
Cao, Yuhong [1 ]
Zhao, Rui [1 ]
Wang, Yizhuo [1 ]
Xiang, Bairan [1 ]
Sartoretti, Guillaume [1 ]
Affiliations
[1] Natl Univ Singapore, Coll Design & Engn, Dept Mech Engn, Singapore 117482, Singapore
Keywords
View Planning for SLAM; reinforcement learning; motion and path planning; AUTONOMOUS EXPLORATION; EFFICIENT
DOI
10.1109/LRA.2024.3379804
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Codes
080202; 1405
Abstract
In this work, we propose a deep reinforcement learning (DRL) based reactive planner to solve large-scale Lidar-based autonomous robot exploration problems in 2D action space. Our DRL-based planner allows the agent to reactively plan its exploration path by making implicit predictions about unknown areas, based on a learned estimation of the underlying transition model of the environment. To this end, our approach relies on learned attention mechanisms for their powerful ability to capture long-term dependencies at different spatial scales to reason about the robot's entire belief over known areas. Our approach relies on ground truth information (i.e., privileged learning) to guide the environment estimation during training, as well as on a graph rarefaction algorithm, which allows models trained in small-scale environments to scale to large-scale ones. Simulation results show that our model exhibits better exploration efficiency (12% in path length, 6% in makespan) and lower planning time (60%) than the state-of-the-art planners in a 130 m x 100 m benchmark scenario. We also validate our learned model on hardware.
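The abstract mentions a graph rarefaction algorithm that lets models trained in small environments scale to large ones. The paper's actual algorithm is not reproduced in this record; the sketch below only illustrates the general idea under stated assumptions — keeping the exploration graph dense near the robot and subsampling distant nodes so the attention input stays bounded. The function name and the `keep_radius`/`coarse_step` parameters are illustrative, not the authors' method.

```python
import math

def rarefy_graph(nodes, robot_pos, keep_radius=10.0, coarse_step=3):
    """Sparsify a dense exploration graph (hypothetical sketch).

    Nodes within keep_radius of the robot are kept at full density;
    farther nodes are subsampled, keeping every coarse_step-th one by
    distance, so the graph fed to an attention model stays small even
    in large-scale maps.
    """
    near, far = [], []
    for n in nodes:
        d = math.dist(n, robot_pos)
        (near if d <= keep_radius else far).append(n)
    # Subsample distant nodes at reduced, distance-ordered density.
    far_sorted = sorted(far, key=lambda n: math.dist(n, robot_pos))
    return near + far_sorted[::coarse_step]
```

In this reading, the rarefied graph preserves fine-grained choices around the robot (where the next action is taken) while still summarizing the global belief, which is what allows a policy trained on small graphs to generalize.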
Pages: 4631-4638
Page count: 8
Related Papers
50 items in total
  • [41] Deep learning-based large-scale named entity recognition for anatomical region of mammalian brain
    Chai, Xiaokang
    Di, Yachao
    Feng, Zhao
    Guan, Yue
    Zhang, Guoqing
    Li, Anan
    Luo, Qingming
    QUANTITATIVE BIOLOGY, 2022, 10 (03) : 253 - 263
  • [42] Deep learning-based coagulant dosage prediction for extreme events leveraging large-scale data
    Kim, Jiwoong
    Hua, Chuanbo
    Lin, Subin
    Kang, Seoktae
    Kang, Joo-Hyon
    Park, Mi-Hyun
    JOURNAL OF WATER PROCESS ENGINEERING, 2024, 66
  • [43] A Deep Learning-Based Cluster Analysis Method for Large-Scale Multi-Label Images
    Xu, Yanping
    TRAITEMENT DU SIGNAL, 2022, 39 (03) : 931 - 937
  • [44] Deep learning-based transient stability assessment framework for large-scale modern power system
    Li, Xin
    Liu, Chenkai
    Guo, Panfeng
    Liu, Shengchi
    Ning, Jing
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER AND ENERGY SYSTEMS, 2022, 139
  • [45] A deep learning-based digital twin model for the temperature field of large-scale battery systems
    Shen, Kai
    Ling, Yujia
    Meng, Xiangqi
    Lai, Xin
    Zhu, Zhicheng
    Sun, Tao
    Li, Dawei
    Zheng, Yuejiu
    Wang, Huaibin
    Xu, Chengshan
    Feng, Xuning
    JOURNAL OF ENERGY STORAGE, 2025, 113
  • [46] Reinforcement learning-based aggregation for robot swarms
    Amjadi, Arash Sadeghi
    Bilaloglu, Cem
    Turgut, Ali Emre
    Na, Seongin
    Sahin, Erol
    Krajnik, Tomas
    Arvin, Farshad
    ADAPTIVE BEHAVIOR, 2024, 32 (03) : 265 - 281
  • [47] Reinforcement learning-based mobile robot navigation
    Altuntas, Nihal
    Imal, Erkan
    Emanet, Nahit
    Ozturk, Ceyda Nur
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2016, 24 (03) : 1747 - 1767
  • [48] Cooperative Deep Reinforcement Learning for Large-Scale Traffic Grid Signal Control
    Tan, Tian
    Bao, Feng
    Deng, Yue
    Jin, Alex
    Dai, Qionghai
    Wang, Jie
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (06) : 2687 - 2700
  • [49] Distributed Hierarchical Deep Reinforcement Learning for Large-Scale Grid Emergency Control
    Chen, Yixi
    Zhu, Jizhong
    Liu, Yun
    Zhang, Le
    Zhou, Jialin
    IEEE TRANSACTIONS ON POWER SYSTEMS, 2024, 39 (02) : 4446 - 4458
  • [50] Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality
    Li, Zhongyu
    Butler, Erik
    Li, Kang
    Lu, Aidong
    Ji, Shuiwang
    Zhang, Shaoting
    NEUROINFORMATICS, 2018, 16 : 339 - 349