Novel View Synthesis via Depth-guided Skip Connections

Cited by: 8
Authors
Hou, Yuxin [1 ]
Solin, Arno [1 ]
Kannala, Juho [1 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Espoo, Finland
Funding
Academy of Finland
DOI
10.1109/WACV48630.2021.00316
CLC Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce a principled approach for synthesizing new views of a scene given a single source image. Previous methods for novel view synthesis can be divided into image-based rendering methods (e.g., flow prediction) and pixel generation methods. Flow prediction lets the target view re-use source pixels directly, but can easily lead to distorted results. Directly regressing pixels can produce structurally consistent results but generally suffers from a lack of low-level detail. In this paper, we use an encoder-decoder architecture to regress the pixels of a target view. To preserve detail, we couple the decoder with aligned feature maps through skip connections, where the alignment is guided by the predicted depth map of the target view. Our experimental results show that our method does not suffer from distortions and, thanks to the aligned skip connections, successfully preserves texture details.
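The alignment step described in the abstract amounts to backward-warping the source encoder's feature maps into the target view using the predicted target-view depth. The sketch below is a minimal NumPy illustration of that geometric operation, not the paper's implementation: it assumes the camera intrinsics `K` and the relative pose `(R, t)` are given, whereas the paper predicts the target depth with a network; the function name `warp_features` is our own.

```python
import numpy as np

def warp_features(src_feat, tgt_depth, K, R, t):
    """Backward-warp a source-view feature map into the target view.

    For each target pixel: back-project with the (predicted) target-view
    depth, transform the 3-D point into the source camera frame, project
    it, and bilinearly sample the source features. This is the alignment
    that depth-guided skip connections perform before the warped features
    are combined with the decoder features.

    src_feat:  (C, H, W) source feature map
    tgt_depth: (H, W) depth of the target view
    K:         (3, 3) camera intrinsics
    R, t:      rotation (3, 3) and translation (3,) from target to source
    """
    C, H, W = src_feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)

    rays = np.linalg.inv(K) @ pix                 # unit-depth rays, target frame
    pts = rays * tgt_depth.reshape(1, -1)         # 3-D points in target frame
    pts_src = R @ pts + t.reshape(3, 1)           # points in source frame
    proj = K @ pts_src                            # project into source image
    u = np.clip(proj[0] / proj[2], 0, W - 1)
    v = np.clip(proj[1] / proj[2], 0, H - 1)

    # Bilinear sampling of the source features with border clamping.
    u0 = np.floor(u).astype(int); v0 = np.floor(v).astype(int)
    u1 = np.clip(u0 + 1, 0, W - 1); v1 = np.clip(v0 + 1, 0, H - 1)
    wu = u - u0; wv = v - v0
    f = src_feat.reshape(C, -1)
    idx = lambda vv, uu: vv * W + uu
    out = ((1 - wu) * (1 - wv) * f[:, idx(v0, u0)]
           + wu * (1 - wv) * f[:, idx(v0, u1)]
           + (1 - wu) * wv * f[:, idx(v1, u0)]
           + wu * wv * f[:, idx(v1, u1)])
    return out.reshape(C, H, W)
```

With an identity pose the warp is a no-op, and a pure x-translation shifts the sampled features by `fx * tx / depth` pixels, which is the behavior one would expect of a depth-dependent correspondence.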
Pages: 3118-3127
Number of pages: 10
Related Papers
50 in total
  • [31] Unsupervised Semantic Segmentation Through Depth-Guided Feature Correlation and Sampling
    Sick, Leon
    Engel, Dominik
    Hermosilla, Pedro
    Ropinski, Timo
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, : 3637 - 3646
  • [32] Design of multispectral array imaging system based on depth-guided network
    Yan, Gangqi
    Song, Yansong
    Zhang, Bo
    Liang, Zonglin
    Piao, Mingxu
    Dong, Keyan
    Zhang, Lei
    Liu, Tianci
    Wang, Yanbai
    Li, Xinghang
    Hu, Wenyi
    OPTICS AND LASERS IN ENGINEERING, 2024, 175
  • [33] Self-Guided Novel View Synthesis via Elastic Displacement Network
    Liu, Yicun
    Zhang, Jiawei
    Ma, Ye
    Ren, Jimmy S.
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 164 - 173
  • [34] A real-time semi-dense depth-guided depth completion network
    JieJie Xu
    Yisheng Zhu
    Wenqing Wang
    Guangcan Liu
    The Visual Computer, 2024, 40 : 87 - 97
  • [35] DXYW: a depth-guided multi-channel edge detection model
    Chuan Lin
    Qu Wang
    Shujuan Wan
    Signal, Image and Video Processing, 2023, 17 : 481 - 489
  • [36] Multi-View Stereo and Depth Priors Guided NeRF for View Synthesis
    Deng, Wang
    Zhang, Xuetao
    Guo, Yu
    Lu, Zheng
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 3922 - 3928
  • [37] Depth-Guided Disocclusion Inpainting of Synthesized RGB-D Images
    Buyssens, Pierre
    Le Meur, Olivier
    Daisy, Maxime
    Tschumperle, David
    Lezoray, Olivier
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (02) : 525 - 538
  • [38] Depth-Guided Dense Dynamic Filtering Network for Bokeh Effect Rendering
    Purohit, Kuldeep
    Suin, Maitreya
    Kandula, Praveen
    Ambasamudram, Rajagopalan
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 3417 - 3426
  • [39] Going From RGB to RGBD Saliency: A Depth-Guided Transformation Model
    Cong, Runmin
    Lei, Jianjun
    Fu, Huazhu
    Hou, Junhui
    Huang, Qingming
    Kwong, Sam
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (08) : 3627 - 3639
  • [40] Depth-Guided Dehazing Network for Long-Range Aerial Scenes
    Wang, Yihu
    Zhao, Jilin
    Yao, Liangliang
    Fu, Changhong
    REMOTE SENSING, 2024, 16 (12)