Semantic SLAM for Mobile Robot with Human-in-the-Loop

Cited by: 1
Authors
Ouyang, Zhenchao [1 ,2 ]
Zhang, Changjie [2 ]
Cui, Jiahe [1 ,2 ]
Affiliations
[1] Beihang Univ, Beihang Hangzhou Innovat Inst Yuhang, Hangzhou 311100, Zhejiang, Peoples R China
[2] Beihang Univ, Sch Comp Sci & Engn, Beijing 100191, Peoples R China
Source
COLLABORATIVE COMPUTING: NETWORKING, APPLICATIONS AND WORKSHARING, COLLABORATECOM 2022, PT II | 2022, Vol. 461
Keywords
Semantic SLAM; Robot; Point cloud segmentation; Human-in-the-loop; Interactive SLAM; MAP;
DOI
10.1007/978-3-031-24386-8_16
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Mobile robots are important participants in modern life and have large commercial application prospects in fields such as unmanned security inspection, logistics, express delivery, cleaning, and medical disinfection. Since LiDAR is unaffected by ambient light and can operate in darkness, localization and navigation based on LiDAR point clouds have become basic modules of mobile robots. However, compared with traditional binocular vision images, sparse, disordered, and noisy point clouds make efficient and stable feature extraction challenging. As a result, LiDAR-based SLAM suffers from more significant cumulative error and poorer consistency in the final map, which degrades tasks such as positioning against a prior point cloud map. To alleviate these problems and improve positioning accuracy, a semantic SLAM with human-in-the-loop is proposed. First, interactive SLAM is introduced to optimize the point cloud poses and obtain a highly consistent point cloud map; then, a point cloud segmentation model is trained on manual semantic annotations to obtain semantic information for each point cloud frame; finally, positioning accuracy is optimized based on the point cloud semantics. The proposed system is validated on a local platform in an underground garage, without GPS or expensive measuring equipment.
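The final step of the pipeline, using per-point semantic labels to improve localization, can be illustrated with a minimal sketch. This is not the authors' implementation: the class IDs, the static/dynamic split, and the translation-only alignment are all simplifying assumptions, meant only to show the idea of restricting scan matching to semantically stable structure (e.g., walls and pillars in a garage) while discarding dynamic classes.

```python
# Hypothetical sketch: filter a labeled LiDAR scan down to static structure
# before scan matching. Label scheme and alignment method are assumptions,
# not the method from the paper.
import numpy as np

STATIC_CLASSES = (1, 2)  # assumed IDs, e.g. 1 = wall, 2 = pillar


def filter_static(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep only points whose semantic label marks static structure."""
    mask = np.isin(labels, STATIC_CLASSES)
    return points[mask]


def centroid_align(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Crude translation-only alignment between two filtered scans
    (a stand-in for a full ICP/registration step)."""
    return dst.mean(axis=0) - src.mean(axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.normal(size=(100, 3))          # N x 3 point cloud
    labels = rng.integers(0, 4, size=100)     # 0/3 dynamic, 1/2 static
    static = filter_static(scan, labels)
    # A second scan shifted by 1 m along x; the translation is recovered
    # from static points only.
    shift = centroid_align(static, static + np.array([1.0, 0.0, 0.0]))
    print(shift)  # ~ [1. 0. 0.]
```

In a real system the crude centroid alignment would be replaced by point-to-plane ICP or a similar registration against the prior semantic map, but the filtering step is the same: dynamic classes never enter the residual.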
Pages: 289-305
Page count: 17