A Survey of Indoor 3D Reconstruction Based on RGB-D Cameras

Cited by: 0
Authors
Zhu, Jinlong [1]
Gao, Changbo [1]
Sun, Qiucheng [1]
Wang, Mingze [1]
Deng, Zhengkai [1]
Affiliations
[1] Changchun Normal Univ, Sch Comp Sci & Technol, Changchun 130032, Peoples R China
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Cameras; Three-dimensional displays; Heuristic algorithms; Dynamics; Solid modeling; Reconstruction algorithms; Surface treatment; Indoor environment; Neural radiance field; 3D reconstruction; indoor scenes; static scenes; dynamic scenes; deep learning; neural radiance fields; MONOCULAR SLAM; RECOGNITION; LOCALIZATION; ENVIRONMENTS; TRACKING;
DOI
10.1109/ACCESS.2024.3443065
Chinese Library Classification (CLC) code
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
With the advancement of consumer-grade RGB-D cameras, obtaining depth information for indoor 3D spaces has become increasingly accessible. This paper systematically reviews 3D reconstruction algorithms for indoor scenes using these cameras, serving as a reference for future research. We cover reconstruction processes and optimization algorithms for both static and dynamic scenes. Additionally, we discuss commonly used datasets, evaluation metrics, and the performance of various reconstruction algorithms. Findings indicate that the main open concerns are the trade-off between reconstruction quality and speed in static scene reconstruction, and the deformation, occlusion, and fast motion of objects in dynamic scenes. Deep learning and Neural Radiance Fields (NeRF) are poised to provide new perspectives and methods to address these challenges.
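As context for the RGB-D reconstruction pipelines the abstract refers to, the sketch below illustrates the per-frame step those pipelines share: back-projecting a depth image into a 3D point cloud through the pinhole camera model. It is a minimal illustration under stated assumptions, not code from the survey; the function name, the intrinsics (fx, fy, cx, cy), and the millimeter depth scale are placeholder values of the kind a sensor calibration would supply.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project a depth image into a 3D point cloud (camera coordinates).

    depth: (H, W) array of raw depth values (e.g., millimeters from an RGB-D sensor).
    fx, fy, cx, cy: pinhole intrinsics; illustrative values, normally obtained from calibration.
    depth_scale: divisor converting raw depth units to meters (assumed 1000 for millimeters).
    """
    h, w = depth.shape
    z = depth.astype(np.float32) / depth_scale       # depth in meters
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates (u: column, v: row)
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no valid depth

# Usage with synthetic depth and Kinect-like intrinsics (assumed, for illustration only)
if __name__ == "__main__":
    depth = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(cloud.shape)
```

Full systems such as KinectFusion then fuse many such per-frame point clouds into a volumetric (TSDF) model; the static and dynamic pipelines surveyed in the paper build on this basic geometry.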
Pages: 112742-112766
Page count: 25
Related papers
50 items in total
  • [31] Exploring RGB-D Cameras for 3D Reconstruction of Cultural Heritage: A New Approach Applied to Brazilian Baroque Sculptures
    Gomes, Leonardo
    Silva, Luciano
    Pereira Bellon, Olga Regina
ACM JOURNAL ON COMPUTING AND CULTURAL HERITAGE, 2018, 11 (04)
  • [32] Research on 3D reconstruction of fruit tree and fruit recognition and location method based on RGB-D camera
    Mai, Chunyan
    Zheng, Lihua
    Sun, Hong
    Yang, Wei
Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2015, 46: 35-40
  • [33] Improving 3D Reconstruction Through RGB-D Sensor Noise Modeling
    Maken, Fahira Afzal
    Muthu, Sundaram
    Nguyen, Chuong
    Sun, Changming
    Tong, Jinguang
    Wang, Shan
    Tsuchida, Russell
    Howard, David
    Dunstall, Simon
    Petersson, Lars
    SENSORS, 2025, 25 (03)
  • [34] Multi-target 3D Reconstruction from RGB-D Data
    Gao, Yang
    Yao, Yuan
    Jiang, Yunliang
PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING (CSSE 2019), 2019
  • [35] 3D Instance Segmentation Using Deep Learning on RGB-D Indoor Data
    Yasir, Siddiqui Muhammad
    Sadiq, Amin Muhammad
    Ahn, Hyunsik
CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 72 (03): 5777-5791
  • [36] DUDMap: 3D RGB-D mapping for dense, unstructured, and dynamic environment
    Hasturk, Ozgur
    Erkmen, Aydan M.
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2021, 18 (03)
  • [37] Real-time reconstruction of pipes using RGB-D cameras
    Kim, Dong-Min
    Ahn, JeongHyeon
    Kim, Seung-wook
    Lee, Jongmin
    Kim, Myungho
    Han, JungHyun
    COMPUTER ANIMATION AND VIRTUAL WORLDS, 2024, 35 (01)
  • [38] InpaintFusion: Incremental RGB-D Inpainting for 3D Scenes
    Mori, Shohei
    Erat, Okan
    Broll, Wolfgang
    Saito, Hideo
    Schmalstieg, Dieter
    Kalkofen, Denis
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2020, 26 (10): 2994-3007
  • [39] ChunkFusion: A Learning-Based RGB-D 3D Reconstruction Framework via Chunk-Wise Integration
    Guo, Chaozheng
    Zhang, Lin
    Shen, Ying
    Zhou, Yicong
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 3818-3822
  • [40] Error Accuracy Estimation of 3D Reconstruction and 3D Camera Pose from RGB-D Data
    Ortiz-Fernandez, Luis E.
    Silva, Bruno M. F.
    Goncalves, Luiz M. G.
2022 35TH SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI 2022), 2022: 67-72