Scene recognition for 3D point clouds: a review

Cited by: 2
Authors
Hao W. [1,2]
Zhang W. [1,2]
Liang W. [1,2]
Xiao Z. [1,2]
Jin H. [1,2]
Affiliations
[1] School of Computer Science and Engineering, Xi'an University of Technology, Xi'an
[2] Shaanxi Key Laboratory for Network Computing and Security Technology, Xi'an
Source
Guangxue Jingmi Gongcheng/Optics and Precision Engineering | 2022 / Vol. 30 / No. 16
Keywords
attention mechanism; deep learning; feature descriptor; graph convolution; point cloud; scene recognition
DOI
10.37188/OPE.20223016.1988
Abstract
Intelligent robots can assist humans by performing high-risk tasks such as object detection and epidemic prevention, and research on scene recognition has therefore attracted considerable attention in recent years. Scene recognition aims to extract high-level semantic features and infer the location of a scene, laying a solid foundation for simultaneous localization and mapping, autonomous driving, intelligent robotics, and loop-closure detection. With the rapid development of 3D scanning technology, point clouds of diverse scenes can now be acquired conveniently with a range of scanners. Compared with images, the geometric features of point clouds are invariant to drastic changes in lighting and time, which makes localization robust. Scene recognition for point clouds is therefore one of the most important and fundamental research topics in computer vision. This paper systematically reviews the progress and current state of scene recognition techniques for point clouds, covering both traditional methods and deep learning methods. Several public datasets for scene recognition are then introduced in detail, and the recognition rates of representative algorithms are summarized. Finally, we discuss the challenges and future research directions of scene recognition for point clouds. This review will help researchers in related fields to quickly and comprehensively understand the state of research on scene recognition for point clouds and lay a foundation for further improvements in recognition accuracy. © 2022 Chinese Academy of Sciences. All rights reserved.
Pages: 1988-2005
Number of pages: 17
Related papers
76 references in total
[11]  
DUBE R, CRAMARIUC A, DUGAS D, et al., SegMap: Segment-based mapping and localization using data-driven descriptors[J], The International Journal of Robotics Research, 39, 2/3, pp. 339-355, (2020)
[12]  
TOMONO M., Loop detection for 3D LiDAR SLAM using segment-group matching[J], Advanced Robotics, 34, 23, pp. 1530-1544, (2020)
[13]  
BOSSE M, ZLOT R., Place recognition using keypoint voting in large 3D lidar datasets[C], 2013 IEEE International Conference on Robotics and Automation, pp. 2677-2684, (2013)
[14]  
CIESLEWSKI T, STUMM E, GAWEL A, et al., Point cloud descriptors for place recognition using sparse visual information[C], 2016 IEEE International Conference on Robotics and Automation, pp. 4830-4836, (2016)
[15]  
HE L, WANG X L, ZHANG H., M2DP: a novel 3D point cloud descriptor and its application in loop closure detection[C], 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 231-237, (2016)
[16]  
ZHANG W X, XIAO C X., PCAN: 3D attention map learning using contextual information for point cloud based retrieval[C], 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 12428-12437, (2019)
[17]  
XIA Y, XU Y S, LI S, et al., SOE-Net: a self-attention and orientation encoding network for point cloud based place recognition[C], 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11343-11352, (2021)
[18]  
BARROS T, GARROTE L, PEREIRA R, et al., AttDLNet: attention-based DL network for 3D LiDAR place recognition[EB/OL], (2021)
[19]  
LIU Z, ZHOU S B, SUO C Z, et al., LPD-Net: 3D point cloud learning for large-scale place recognition and environment analysis[C], 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 2831-2840, (2019)
[20]  
LIU Z, SUO C Z, ZHOU S B, et al., SeqLPD: sequence matching enhanced loop-closure detection based on large-scale point cloud description for self-driving vehicles[C], 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1218-1223, (2019)