Novel View Synthesis using Feature-preserving Depth Map Resampling

Cited: 3
Authors
Chen, Duo [1]
Feng, Jie [1]
Zhou, Bingfeng [1]
Affiliations
[1] Peking Univ, Inst Comp Sci & Technol, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (GRAPP), VOL 1, 2019
Funding
National Natural Science Foundation of China
Keywords
Novel View Synthesis; Depth Map; Importance Sampling; Image Projection;
DOI
10.5220/0007308701930200
CLC Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
In this paper, we present a new method for synthesizing images of a 3D scene at novel viewpoints, based on a set of reference images taken in a casual manner. With such an image set as input, our method first reconstructs a sparse 3D point cloud of the scene, which is then projected onto each reference image to obtain a set of depth points. Afterwards, an improved error-diffusion sampling method is used to generate a sampling point set in each reference image that includes the depth points and preserves the image features well, so that the image can be triangulated on the basis of this point set. We then propose a distance metric based on Euclidean distance, color similarity and boundary distribution to propagate depth information from the depth points to the remaining sampling points, and a dense depth map is generated by interpolation within the triangle mesh. Given a desired viewpoint, several of the closest reference viewpoints are selected, and their colored depth maps are projected to the novel view. Finally, the multiple projected images are merged to fill the holes caused by occlusion, resulting in a complete novel view. Experimental results demonstrate that our method achieves high-quality results for outdoor scenes containing challenging objects.
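As a rough illustration (not the authors' implementation), the Python sketch below shows how a combined distance of the kind described in the abstract might be used to propagate depth from the sparse depth points to the remaining sampling points. The function names, the (x, y) point convention, the precomputed binary edge_map standing in for the boundary-distribution term, and the equal default weights are all assumptions made for this sketch.

```python
import numpy as np

def propagation_distance(p, q, image, edge_map,
                         w_spatial=1.0, w_color=1.0, w_boundary=1.0):
    """Hypothetical combined distance between a sampling point p and a
    depth point q, mixing Euclidean distance, color similarity and a
    boundary term (edge pixels crossed between p and q)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)

    # Spatial term: Euclidean distance in the image plane.
    d_spatial = np.linalg.norm(p - q)

    # Color term: RGB difference at the two pixel locations.
    cp = image[int(p[1]), int(p[0])].astype(float)
    cq = image[int(q[1]), int(q[0])].astype(float)
    d_color = np.linalg.norm(cp - cq)

    # Boundary term: count edge pixels along the segment p-q so that
    # depth is discouraged from propagating across object boundaries.
    n = int(d_spatial) + 2
    xs = np.linspace(p[0], q[0], n).round().astype(int)
    ys = np.linspace(p[1], q[1], n).round().astype(int)
    d_boundary = float(edge_map[ys, xs].sum())

    return w_spatial * d_spatial + w_color * d_color + w_boundary * d_boundary


def propagate_depth(sample_points, depth_points, depth_values, image, edge_map):
    """Assign each sampling point the depth of its closest depth point
    under the combined metric (simple nearest-neighbour propagation)."""
    propagated = np.empty(len(sample_points))
    for i, p in enumerate(sample_points):
        costs = [propagation_distance(p, q, image, edge_map) for q in depth_points]
        propagated[i] = depth_values[int(np.argmin(costs))]
    return propagated
```

Note that this sketch uses simple nearest-neighbour propagation under the combined metric, whereas the paper propagates depth within the triangulated sampling point set and then interpolates a dense depth map over the triangle mesh.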
Pages: 193-200
Number of pages: 8
Related Papers
25 records in total
  • [1] Deeply Supervised Depth Map Super-Resolution as Novel View Synthesis
    Song, Xibin
    Dai, Yuchao
    Qin, Xueying
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019, 29 (08) : 2323 - 2336
  • [2] NOVEL VIEW SYNTHESIS BASED ON DEPTH MAP LAYERS REPRESENTATION
    Manap, Nurulfajar Abd
    Soraghan, John J.
2011 3DTV CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON), 2011
  • [3] Adapting stereoscopic movies to the viewing conditions using depth-preserving and artifact-free novel view synthesis
    Devernay, Frederic
    Duchene, Sylvain
    Ramos-Peon, Adrian
    STEREOSCOPIC DISPLAYS AND APPLICATIONS XXII, 2011, 7863
  • [4] Exploiting a Spatial Attention Mechanism for Improved Depth Completion and Feature Fusion in Novel View Synthesis
    Truong, Anh Minh
    Philips, Wilfried
    Veelaert, Peter
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2024, 5 : 204 - 212
  • [5] Depth map misalignment correction and dilation for DIBR view synthesis
    Xu, Xuyuan
    Po, Lai-Man
    Ng, Ka-Ho
    Peng, Litong
    Cheung, Kwok-Wai
    Cheung, Chun-Ho
    Ting, Chi-Wang
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2013, 28 (09) : 1023 - 1045
  • [6] Depth Map Super-Resolution Considering View Synthesis Quality
    Lei, Jianjun
    Li, Lele
    Yue, Huanjing
    Wu, Feng
    Ling, Nam
    Hou, Chunping
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (04) : 1732 - 1745
  • [7] Multi-View Stereo using Cross-View Depth Map Completion and Row-Column Depth Refinement
    Nair, Nirmal S.
    Nair, Madhu S.
    THIRTEENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2021), 2021, 11878
  • [8] Novel View Synthesis with Depth Priors Using Neural Radiance Fields and CycleGAN with Attention Transformer
    Qin, Yuxin
    Li, Xinlin
    Zu, Linan
    Jin, Ming Liang
SYMMETRY-BASEL, 2025, 17 (01)
  • [9] Feature Field Fusion for few-shot novel view synthesis
    Li, Junting
    Zhou, Yanghong
    Fan, Jintu
    Shou, Dahua
    Xu, Sa
    Mok, P. Y.
    IMAGE AND VISION COMPUTING, 2025, 156
  • [10] Robust novel view synthesis from multi-view feature stereo matching priors
    Wang, Jianxin
    Shao, Haijian
    Deng, Xing
    Lian, Shuheng
    MULTIMEDIA SYSTEMS, 2025, 31 (02)