Learning Hierarchical Adaptive Code Clouds for Neural 3D Shape Representation

Citations: 0
Authors
Lu, Yuanxun [1 ,2 ]
Ji, Xinya [1 ,2 ]
Zhu, Hao [1 ,3 ]
Cao, Xun [1 ,2 ]
Affiliations
[1] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[2] Nanjing Univ, Sch Elect Sci & Engn, Nanjing 210023, Peoples R China
[3] Nanjing Univ, Sch Intelligence Sci & Technol, Suzhou 215163, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Representation learning; shape analysis; deep implicit function; 3D reconstruction; 3D modeling;
DOI
10.1007/s11633-024-1491-7
CLC Number
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Neural implicit representation (NIR) has attracted significant attention in 3D shape representation for its efficiency, generalizability, and flexibility compared with traditional explicit representations. Previous works usually parameterize shapes with neural feature grids/volumes, which prove inefficient due to the discrete position constraints they impose on the representation. While recent advances make it possible to optimize continuous positions for the latent codes, they still lack the self-adaptability needed to represent diverse kinds of shapes well. In this paper, we introduce a hierarchical adaptive code cloud (HACC) model to achieve an accurate and compact implicit 3D shape representation. Specifically, we begin by assigning adaptive influence fields and dynamic positions to latent codes, both optimizable during training, and propose an adaptive aggregation function to fuse the contributions of candidate latent codes with respect to query points. In addition, these basic modules are stacked hierarchically with gradually narrowing influence field thresholds, and are therefore heuristically forced to focus on capturing finer structures at higher levels. These formulations greatly improve the distribution and effectiveness of local latent codes and reconstruct shapes from coarse to fine with high accuracy. Extensive qualitative and quantitative evaluations on both single-shape reconstruction and large-scale dataset representation tasks demonstrate the superiority of our method over state-of-the-art approaches.
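The core idea of the abstract — fusing candidate latent codes for a query point, weighted by per-code influence fields centred at optimizable positions — can be illustrated with a minimal sketch. All names here, the Gaussian falloff kernel, and the normalization scheme are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

def aggregate_latent_codes(query, positions, codes, radii):
    """Fuse candidate latent codes for one query point.

    Each latent code has a dynamic position and an adaptive influence
    radius; its contribution decays with distance from the query
    (a Gaussian falloff is assumed here for illustration).

    query:     (3,)  query point
    positions: (N, 3) latent-code positions (optimizable in training)
    codes:     (N, D) latent code vectors
    radii:     (N,)  per-code influence-field radii (optimizable)
    returns:   (D,)  fused latent code conditioning the implicit decoder
    """
    d = np.linalg.norm(positions - query, axis=1)   # distance to each code
    w = np.exp(-((d / radii) ** 2))                 # assumed Gaussian influence
    w = w / (w.sum() + 1e-8)                        # normalize contributions
    return w @ codes                                # weighted fusion
```

In a hypothetical full pipeline, the fused code would condition an implicit decoder (e.g., an SDF MLP) at the query point, and stacking such modules with progressively shrinking radius thresholds would yield the coarse-to-fine hierarchy the abstract describes.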
Pages: 304-323
Page count: 20