Latent Partition Implicit with Surface Codes for 3D Representation

Cited by: 20
Authors
Chen, Chao [1]
Liu, Yu-Shen [1]
Han, Zhizhong [2]
Affiliations
[1] Tsinghua Univ, Sch Software, BNRist, Beijing, Peoples R China
[2] Wayne State Univ, Dept Comp Sci, Detroit, MI USA
Source
COMPUTER VISION - ECCV 2022, PT III | 2022 / Vol. 13663
Funding
National Natural Science Foundation of China; National Key R&D Program of China
Keywords
Neural implicit representation; Surface codes; Shape reconstruction
DOI
10.1007/978-3-031-20062-5_19
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep implicit functions have shown remarkable shape modeling ability in various 3D computer vision tasks. One drawback is that it is hard for them to represent a 3D shape as multiple parts. Current solutions learn various primitives and blend them directly in the spatial space, but they still struggle to approximate the 3D shape accurately. To resolve this problem, we introduce a novel implicit representation that represents a single 3D shape as a set of parts in the latent space, towards both highly accurate and plausibly interpretable shape modeling. Our insight is that both part learning and part blending can be conducted much more easily in the latent space than in the spatial space. We name our method Latent Partition Implicit (LPI) because of its ability to cast global shape modeling into multiple local part modeling problems, which partition the global shape. LPI represents a shape as Signed Distance Functions (SDFs) using surface codes. Each surface code is a latent code representing a part whose center lies on the surface, which enables us to flexibly employ intrinsic attributes of shapes or additional surface properties. Eventually, LPI can reconstruct both the shape and the parts on the shape, both of which are plausible meshes. LPI is a multi-level representation that can partition a shape into different numbers of parts after training. LPI can be learned without ground-truth signed distances, point normals, or any supervision for part partition. LPI outperforms the latest methods on widely used benchmarks in terms of reconstruction accuracy and modeling interpretability. Our code, data and models are available at https://github.com/chenchao15/LPI.
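As a rough illustration of the representation described in the abstract (not the authors' released implementation; see the repository above for that), the sketch below shows how per-part latent codes anchored at surface points could be blended in latent space before a single shared decoder predicts a signed distance for a query point. All names and design choices here (SurfaceCodeSDF, num_parts, code_dim, k-nearest softmax blending weights) are hypothetical assumptions made for illustration.

# Minimal sketch, assuming: learnable "surface codes" attached to part centers,
# latent-space blending of the k nearest parts, and one shared SDF decoder.
import torch
import torch.nn as nn

class SurfaceCodeSDF(nn.Module):
    def __init__(self, num_parts=64, code_dim=128, k=8):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_parts, code_dim) * 0.01)  # one latent code per part
        self.centers = nn.Parameter(torch.rand(num_parts, 3))               # part centers on/near the surface
        self.k = k
        self.decoder = nn.Sequential(                                        # shared SDF decoder
            nn.Linear(code_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x):                                  # x: (N, 3) query points
        d = torch.cdist(x, self.centers)                   # (N, P) distances to part centers
        dist, idx = d.topk(self.k, dim=1, largest=False)   # k nearest parts per query
        w = torch.softmax(-dist, dim=1)                    # closer parts get larger blending weights
        codes = self.codes[idx]                            # (N, k, C) codes of the nearest parts
        blended = (w.unsqueeze(-1) * codes).sum(dim=1)     # blend the parts in latent space
        rel = x.unsqueeze(1) - self.centers[idx]           # (N, k, 3) query offset from each center
        rel = (w.unsqueeze(-1) * rel).sum(dim=1)           # weighted local coordinate
        return self.decoder(torch.cat([blended, rel], dim=-1)).squeeze(-1)  # predicted signed distance

# usage: query signed distances for random points
model = SurfaceCodeSDF()
sdf = model(torch.rand(1024, 3))                           # (1024,) signed distance estimates

Blending the latent codes before a single decoding pass, rather than blending per-part SDF values in space, is one way to realize the distinction the abstract draws between latent-space and spatial-space part blending.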
Pages: 322-343
Page count: 22