Novel View Synthesis via Depth-guided Skip Connections

Cited by: 8
Authors
Hou, Yuxin [1 ]
Solin, Arno [1 ]
Kannala, Juho [1 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Espoo, Finland
Funding
Academy of Finland;
Keywords
DOI
10.1109/WACV48630.2021.00316
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce a principled approach for synthesizing new views of a scene given a single source image. Previous methods for novel view synthesis can be divided into image-based rendering methods (e.g., flow prediction) and pixel generation methods. Flow prediction enables the target view to re-use source pixels directly, but it can easily lead to distorted results. Directly regressing pixels can produce structurally consistent results but generally suffers from a lack of low-level details. In this paper, we use an encoder-decoder architecture to regress the pixels of a target view. To preserve details, we feed the decoder aligned feature maps through skip connections, where the alignment is guided by the predicted depth map of the target view. Our experimental results show that our method does not suffer from distortions and successfully preserves texture details thanks to the aligned skip connections.
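To make the depth-guided skip-connection idea concrete, the sketch below warps source-view encoder features into the target view using the predicted target-view depth, so they can be concatenated with decoder features at the same resolution. This is a minimal PyTorch illustration under assumed conditions (pinhole intrinsics, a known target-to-source relative pose, bilinear backward warping); the function names, tensor shapes, and warping formulation are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of depth-guided alignment for skip connections.
# Assumptions (not from the paper's text): pinhole intrinsics K, relative
# pose (R, t) mapping target camera coordinates into the source camera,
# and bilinear backward warping. Names and shapes are illustrative.
import torch
import torch.nn.functional as F

def warp_source_features(src_feat, tgt_depth, K, R, t):
    """Warp encoder features from the source view into the target view.

    src_feat : (B, C, H, W) feature map from the source-view encoder
    tgt_depth: (B, 1, H, W) predicted depth of the *target* view
    K        : (B, 3, 3) camera intrinsics (assumed shared by both views)
    R, t     : (B, 3, 3), (B, 3, 1) target-to-source rotation/translation
    """
    B, C, H, W = src_feat.shape
    device = src_feat.device

    # Pixel grid of the target view in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)

    # Back-project target pixels to 3D using the predicted target depth.
    cam = torch.inverse(K) @ pix.expand(B, -1, -1)          # (B, 3, H*W)
    cam = cam * tgt_depth.reshape(B, 1, -1)

    # Transform into the source camera and project to source pixel coords.
    src_cam = R @ cam + t                                    # (B, 3, H*W)
    src_pix = K @ src_cam
    src_pix = src_pix[:, :2] / src_pix[:, 2:3].clamp(min=1e-6)

    # Normalize to [-1, 1] and bilinearly sample the source features.
    grid_x = 2.0 * src_pix[:, 0] / (W - 1) - 1.0
    grid_y = 2.0 * src_pix[:, 1] / (H - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src_feat, grid, align_corners=True)

# Usage: the warped (aligned) features replace a conventional skip
# connection, e.g.
#   aligned = warp_source_features(enc_feat, tgt_depth, K, R, t)
#   dec_in  = torch.cat([dec_feat, aligned], dim=1)
```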
Pages: 3118 - 3127
Number of pages: 10
Related Papers
50 records in total
  • [1] Efficient Depth-Guided Urban View Synthesis
    Miao, Sheng
    Huang, Jiaxin
    Bai, Dongfeng
    Qiu, Weichao
    Liu, Bingbing
    Geiger, Andreas
    Liao, Yiyi
    COMPUTER VISION - ECCV 2024, PT XXX, 2025, 15088 : 90 - 107
  • [2] NOVEL VIEW SYNTHESIS WITH SKIP CONNECTIONS
    Kim, Juhyeon
    Kim, Young Min
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1616 - 1620
  • [3] Enhancing View Synthesis with Depth-Guided Neural Radiance Fields and Improved Depth Completion
    Wang, Bojun
    Zhang, Danhong
    Su, Yixin
    Zhang, Huajun
    SENSORS, 2024, 24 (06)
  • [4] Depth-guided view synthesis for light field reconstruction from a single image
    Zhou, Wenhui
    Liu, Gaomin
    Shi, Jiangwei
    Zhang, Hua
    Dai, Guojun
    IMAGE AND VISION COMPUTING, 2020, 95
  • [5] Depth-Guided Patch-Based Disocclusion Filling for View Synthesis via Markov Random Field Modelling
    Ruzic, Tijana
    Jovanov, Ljubomir
    Luong, Hiep Quang
    Pizurica, Aleksandra
    Philips, Wilfried
    2014 8TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATION SYSTEMS (ICSPCS), 2014,
  • [6] PROBABILISTIC DEPTH-GUIDED MULTI-VIEW IMAGE DENOISING
    Lee, Chul
    Kim, Chang-Su
    Lee, Sang-Uk
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 905 - 908
  • [7] Depth-guided saliency detection via boundary information
    Zhou, Xiaofei
    Wen, Hongfa
    Shi, Ran
    Yin, Haibing
    Yan, Chenggang
    IMAGE AND VISION COMPUTING, 2020, 103 (103)
  • [8] Depth-guided asymmetric CycleGAN for rain synthesis and image deraining
    Qi, Yinhe
    Zhang, Huanrong
    Jin, Zhi
    Liu, Wanquan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (25) : 35935 - 35952
  • [9] Depth-Guided Deep Video Inpainting
    Li, Shibo
    Zhu, Shuyuan
    Ge, Yao
    Zeng, Bing
    Imran, Muhammad Ali
    Abbasi, Qammer H.
    Cooper, Jonathan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5860 - 5871
  • [10] Depth-Guided NeRF Training via Earth Mover's Distance
    Rau, Anita
    Aklilu, Josiah
    Holsinger, F. Christopher
    Yeung-Levy, Serena
    COMPUTER VISION - ECCV 2024, PT LXIV, 2025, 15122 : 1 - 17