Geometric Warping Error Aware Spatial-Temporal Enhancement for DIBR Oriented View Synthesis

Cited by: 0
Authors
Wang, Dewei [1 ]
Peng, Rui [1 ]
Li, Shuai [1 ]
Gao, Yanbo [2 ,3 ]
Li, Chuankun [4 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250100, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
[3] Shandong Univ, WeiHai Res Inst Ind Technol, Jinan 250100, Peoples R China
[4] North Univ China, Taiyuan 030051, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Three-dimensional displays; Convolution; Distortion; Merging; Deep learning; Cameras; Superresolution; Depth-Image-based Rendering (DIBR); geometric warping error (GWE); temporal enhancement; view synthesis;
DOI
10.1109/LSP.2024.3388995
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics & Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Depth-Image-based Rendering (DIBR) oriented view synthesis is a crucial technique for virtual view synthesis in 3D video. It generates virtual-viewpoint images from a few (usually two) reference viewpoints using DIBR-based 3D warping. Geometric warping error, often overlooked by existing methods, is inevitably introduced in the 3D warping process because images are represented on an integer pixel grid. To reduce the geometric warping error (GWE) in virtual view synthesis, this letter proposes a GWE aware spatial-temporal enhancement framework to improve DIBR oriented view synthesis. In the spatial domain, a GWE aware view warping and blending enhancement module adaptively improves the warped views and merges the views warped from different reference viewpoints by preserving and exploiting the GWE. In the temporal domain, a refinement and temporal enhancement module refines and enhances the blended view using temporal information. Experiments on diverse datasets demonstrate the effectiveness of addressing the geometric warping error and of exploiting temporal information, and a further ablation study confirms the effectiveness of each proposed module.
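The integer-pixel cause of the GWE described in the abstract can be illustrated with a minimal sketch of 1-D DIBR forward warping under rectified-stereo assumptions (the function name, per-row interface, and z-buffer handling are illustrative choices, not the paper's implementation): each reference pixel is shifted by a depth-derived disparity, the sub-pixel target position is rounded to the nearest integer column, and the rounding residual is exactly the geometric warping error that the proposed framework preserves and exploits.

```python
import numpy as np

def dibr_warp_1d(ref_row, depth_row, focal, baseline):
    """Forward-warp one image row to a virtual view via disparity,
    recording the geometric warping error (GWE) introduced when
    sub-pixel targets are rounded to integer pixel positions."""
    w = ref_row.shape[0]
    virt = np.zeros_like(ref_row)
    gwe = np.full(w, np.nan)      # per-pixel rounding residual in [-0.5, 0.5]
    z_buf = np.full(w, np.inf)    # keep the nearest surface on collisions
    for x in range(w):
        disp = focal * baseline / depth_row[x]  # horizontal disparity
        xt = x - disp                           # sub-pixel target column
        xi = int(np.round(xt))                  # integer pixel: source of GWE
        if 0 <= xi < w and depth_row[x] < z_buf[xi]:
            virt[xi] = ref_row[x]
            z_buf[xi] = depth_row[x]
            gwe[xi] = xt - xi                   # the geometric warping error
    return virt, gwe
```

With a constant depth plane the residual is uniform; in real scenes it varies per pixel, which is why the abstract argues for preserving the GWE map and feeding it to the spatial blending module rather than discarding it after rounding.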
Pages: 1219-1223
Page count: 5