AutoRecon: Automated 3D Object Discovery and Reconstruction

Citations: 9
Authors
Wang, Yuang [1 ]
He, Xingyi [1 ]
Peng, Sida [1 ]
Lin, Haotong [1 ]
Bao, Hujun [1 ]
Zhou, Xiaowei [1 ]
Affiliations
[1] Zhejiang University, State Key Lab of CAD&CG, Hangzhou, People's Republic of China
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
DOI
10.1109/CVPR52729.2023.02048
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
A fully automated object reconstruction pipeline is crucial for digital content creation. While the area of 3D reconstruction has witnessed profound developments, the removal of background to obtain a clean object model still relies on different forms of manual labor, such as bounding box labeling, mask annotations, and mesh manipulations. In this paper, we propose a novel framework named AutoRecon for the automated discovery and reconstruction of an object from multi-view images. We demonstrate that foreground objects can be robustly located and segmented from SfM point clouds by leveraging self-supervised 2D vision transformer features. Then, we reconstruct decomposed neural scene representations with dense supervision provided by the decomposed point clouds, resulting in accurate object reconstruction and segmentation. Experiments on the DTU, BlendedMVS and CO3D-V2 datasets demonstrate the effectiveness and robustness of AutoRecon. The code and supplementary material are available on the project page: https://zju3dv.github.io/autorecon/.
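The abstract describes a two-stage pipeline: first, decompose the SfM point cloud into foreground and background using self-supervised 2D ViT features; second, reconstruct a decomposed neural scene representation supervised by that decomposed point cloud. Below is a minimal, hypothetical sketch of the first stage only, not the authors' released code: it lifts per-view 2D features (e.g., from a DINO ViT) onto SfM points by projection and averaging, then applies a two-way clustering as a crude stand-in for the paper's point-cloud decomposition. All input names (points, feat_maps, Ks, Rs, ts) are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def project(points, K, R, t):
    """Project (N, 3) world points into one view.

    Returns (N, 2) pixel coordinates and (N,) camera-frame depths,
    assuming world-to-camera extrinsics [R | t] and intrinsics K.
    """
    cam = points @ R.T + t            # points in the camera frame
    uv = cam @ K.T                    # apply pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]

def lift_features(points, feat_maps, Ks, Rs, ts):
    """Average 2D ViT features over all views where a point lands in-bounds.

    feat_maps: (V, H, W, C) per-view feature maps, e.g. DINO patch features
    upsampled to image resolution. Occlusion handling is omitted here.
    """
    N = len(points)
    V, H, W, C = feat_maps.shape
    acc = np.zeros((N, C))
    cnt = np.zeros(N)
    for f, K, R, t in zip(feat_maps, Ks, Rs, ts):
        uv, z = project(points, K, R, t)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        acc[ok] += f[v[ok], u[ok]]    # accumulate the sampled feature
        cnt[ok] += 1
    return acc / np.maximum(cnt[:, None], 1)

def segment_points(point_feats):
    """Two-way clustering of lifted features: a crude foreground/background split."""
    return KMeans(n_clusters=2, n_init=10).fit_predict(point_feats)

# Usage with stand-in inputs:
# labels = segment_points(lift_features(points, feat_maps, Ks, Rs, ts))
```

This sketch only conveys the feature-lifting idea; the paper's actual coarse decomposition is more robust than a plain two-way k-means, and its second stage further refines the segmentation during neural surface reconstruction using dense point-cloud supervision.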
Pages: 21382-21391 (10 pages)