SLAM Algorithm for Mobile Robots Based on Improved LVI-SAM in Complex Environments

Cited by: 0
Authors
Wang, Wenfeng [1 ,2 ]
Li, Haiyuan [1 ]
Yu, Haiming [1 ,3 ]
Xie, Qiuju [1 ,4 ]
Dong, Jie [1 ]
Sun, Xiaofei [1 ]
Liu, Honggui [5 ]
Sun, Congcong [6 ]
Li, Bin [7 ]
Zheng, Fang [3 ,8 ]
Affiliations
[1] Northeast Agr Univ, Coll Elect Engn & Informat, Harbin 150030, Peoples R China
[2] Minist Agr & Rural Affairs, Key Lab Equipment & Informatizat Environm Control, Hangzhou 310058, Peoples R China
[3] Minist Agr & Rural Affairs, Key Lab Smart Farming Technol Agr Anim, Wuhan 430070, Peoples R China
[4] Minist Educ, Engn Res Ctr Pig Intelligent Breeding & Farming No, Harbin 150030, Peoples R China
[5] Northeast Agr Univ, Coll Anim Sci & Technol, Harbin 150030, Peoples R China
[6] Wageningen Univ, Agr Biosyst Engn Grp, NL-6700 AA Wageningen, Netherlands
[7] Beijing Acad Agr & Forestry Sci, Intelligent Equipment Res Ctr, Beijing 100097, Peoples R China
[8] Huazhong Agr Univ, Coll Informat, Wuhan 430070, Peoples R China
Keywords
multi-sensor fusion; SLAM; feature extraction; loop-closure detection; navigation; simultaneous localization; versatile; robust
DOI
10.3390/s24227214
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
Autonomous robot movement depends on quickly determining the robot's position and surroundings, a capability for which SLAM technology provides essential support. In complex, dynamic environments, single-sensor SLAM methods often suffer from degeneracy. This paper proposes a multi-sensor fusion SLAM method based on the LVI-SAM framework. First, the state-of-the-art SuperPoint feature detection algorithm is used to extract feature points in the visual-inertial subsystem, improving feature detection in complex scenes. In addition, scan context is used to optimize loop-closure detection and improve its performance in such scenes. Experimental results show that, compared with LVI-SAM, the trajectory RMSE on the KITTI 05 sequence and the M2DGR Street07 sequence is reduced by 12% and 11%, respectively. In simulated complex animal-farm environments, the proposed method also yields a smaller error between the starting and ending points of the trajectory than LVI-SAM. These comparisons demonstrate that the proposed method achieves higher localization and mapping accuracy and robustness in complex animal-farm environments.
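The abstract credits two changes to the LVI-SAM pipeline: SuperPoint-based visual feature extraction and scan-context-based loop-closure detection. As a rough illustration of the second idea only, the sketch below builds a scan context descriptor (a ring-by-sector polar grid storing the maximum point height of a LiDAR scan) and compares two descriptors with a rotation-invariant distance. The bin counts, range limit, and function names are illustrative assumptions, not taken from the paper. A minimal Python sketch, assuming NumPy:

    import numpy as np

    def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
        # points: (N, 3) array of x, y, z LiDAR points in the sensor frame.
        # Descriptor: ring x sector grid holding the maximum z (height) per polar bin.
        desc = np.zeros((num_rings, num_sectors))
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.hypot(x, y)
        theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)
        keep = r < max_range
        ring = np.minimum((r[keep] / max_range * num_rings).astype(int), num_rings - 1)
        sector = np.minimum((theta[keep] / (2.0 * np.pi) * num_sectors).astype(int), num_sectors - 1)
        np.maximum.at(desc, (ring, sector), z[keep])  # per-bin maximum height
        return desc

    def sc_distance(d1, d2):
        # Rotation-invariant distance: try every column (sector) shift of d2 and
        # keep the smallest mean cosine distance between corresponding columns.
        best = np.inf
        for shift in range(d2.shape[1]):
            shifted = np.roll(d2, shift, axis=1)
            num = np.sum(d1 * shifted, axis=0)
            den = np.linalg.norm(d1, axis=0) * np.linalg.norm(shifted, axis=0)
            cos_sim = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
            best = min(best, float(np.mean(1.0 - cos_sim)))
        return best

In a pipeline of this kind, a loop-closure candidate between scans i and j would be declared when sc_distance(scan_context(scan_i), scan_context(scan_j)) falls below a threshold, and the candidate would then be verified by scan registration (e.g., ICP) before a loop factor is added to the pose graph.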
Pages: 17
References
Total: 31
[1]   A review of visual SLAM for robotics: evolution, properties, and future applications [J].
Al-Tawil, Basheer ;
Hempel, Thorsten ;
Abdelrahman, Ahmed ;
Al-Hamadi, Ayoub .
FRONTIERS IN ROBOTICS AND AI, 2024, 11
[2]   A Review on Challenges of Autonomous Mobile Robot and Sensor Fusion Methods [J].
Alatise, Mary B. ;
Hancke, Gerhard P. .
IEEE ACCESS, 2020, 8 :39830-39846
[3]   [Anonymous], 1994, P 1994 P IEEE C COMP
[4]   Simultaneous localization and mapping (SLAM): Part II [J].
Bailey, Tim ;
Durrant-Whyte, Hugh .
IEEE ROBOTICS & AUTOMATION MAGAZINE, 2006, 13 (03) :108-117
[5]   A METHOD FOR REGISTRATION OF 3-D SHAPES [J].
BESL, PJ ;
MCKAY, ND .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1992, 14 (02) :239-256
[6]   Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age [J].
Cadena, Cesar ;
Carlone, Luca ;
Carrillo, Henry ;
Latif, Yasir ;
Scaramuzza, Davide ;
Neira, Jose ;
Reid, Ian ;
Leonard, John J. .
IEEE TRANSACTIONS ON ROBOTICS, 2016, 32 (06) :1309-1332
[7]   ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial, and Multimap SLAM [J].
Campos, Carlos ;
Elvira, Richard ;
Gomez Rodriguez, Juan J. ;
Montiel, Jose M. M. ;
Tardos, Juan D. .
IEEE TRANSACTIONS ON ROBOTICS, 2021, 37 (06) :1874-1890
[8]   SLAM Overview: From Single Sensor to Heterogeneous Fusion [J].
Chen, Weifeng ;
Zhou, Chengjun ;
Shang, Guangtao ;
Wang, Xiyang ;
Li, Zhenxiong ;
Xu, Chonghui ;
Hu, Kai .
REMOTE SENSING, 2022, 14 (23)
[9]   A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping [J].
Debeunne, Cesar ;
Vivet, Damien .
SENSORS, 2020, 20 (07)
[10]   SuperPoint: Self-Supervised Interest Point Detection and Description [J].
DeTone, Daniel ;
Malisiewicz, Tomasz ;
Rabinovich, Andrew .
PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, :337-349