Real-time representation and rendering of high-resolution 3D light field based on texture-enhanced optical flow prediction

Cited by: 2
Authors
Li, Ningchi [1 ]
Yu, Xunbo [1 ]
Gao, Xin [1 ]
Yan, Binbin [1 ]
Li, Donghu [1 ]
Hong, Jianhao [1 ]
Tong, Yixiang [1 ]
Wang, Yue [1 ]
Hu, Yunfan [1 ]
Ning, Chenyu [1 ]
He, Jinhong [1 ]
Ji, Luyu [1 ]
Sang, Xinzhu [1 ]
Affiliations
[1] Beijing University of Posts and Telecommunications (BUPT), State Key Laboratory of Information Photonics and Optical Communications, Beijing 100876, People's Republic of China
Source
OPTICS EXPRESS | 2024, Vol. 32, Issue 15
Keywords
VIEW SYNTHESIS; DISPLAY;
DOI
10.1364/OE.529378
Chinese Library Classification
O43 [Optics]
Discipline Classification Code
070207; 0803
Abstract
Three-dimensional (3D) light field displays can provide an immersive visual perception and have attracted widespread attention, especially in 3D light field communications, where they can deliver face-to-face communication experiences. However, owing to limitations in 3D reconstruction and dense-view rendering efficiency, generating high-quality 3D light field content in real time remains a challenge. Traditional 3D light field capturing and reconstruction methods suffer from high reconstruction complexity and low rendering efficiency. Here, a real-time optical flow representation for the high-resolution light field is proposed. Based on the principle of 3D light field display, we use optical flow to ray-trace and multiplex sparse-view pixels, and we synthesize 3D light field images simultaneously during the real-time view interpolation process. In addition, we built a complete capturing-display system to verify the effectiveness of our method. The experimental results show that the proposed method can synthesize 8K 3D light field videos containing 100 views in real time: the PSNR of the virtual views is around 32 dB, the SSIM is over 0.99, and the rendered frame rate is 32 fps. Qualitative experimental results show that this method can be used for high-resolution 3D light field communication.
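To make the core idea concrete, below is a minimal, hypothetical sketch of optical-flow-based virtual view interpolation between two adjacent sparse camera views, written in Python with OpenCV. It is not the paper's texture-enhanced prediction pipeline or its ray-tracing/pixel-multiplexing implementation; the function name, parameters, and the simple flow-scaling warp are illustrative assumptions only.

```python
# Hypothetical sketch: synthesize a virtual view between two rectified,
# adjacent camera views via dense optical flow (alpha = 0 -> left, 1 -> right).
# Illustrative only; not the method described in the paper.
import cv2
import numpy as np

def interpolate_view(left_bgr: np.ndarray, right_bgr: np.ndarray, alpha: float) -> np.ndarray:
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    # Dense optical flow from the right view to the left view, so that
    # right[y, x] roughly matches left[y + flow[y, x, 1], x + flow[y, x, 0]].
    flow_rl = cv2.calcOpticalFlowFarneback(right_gray, left_gray, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)

    # Backward-warp the left view along a fraction of the flow. Scaling the
    # flow by alpha is a crude stand-in for the virtual camera position; a
    # real system would also handle occlusions and flow consistency.
    h, w = left_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + alpha * flow_rl[..., 0]).astype(np.float32)
    map_y = (grid_y + alpha * flow_rl[..., 1]).astype(np.float32)
    return cv2.remap(left_bgr, map_x, map_y, cv2.INTER_LINEAR)
```

In a full light field pipeline, many such virtual views (e.g., the 100 views mentioned in the abstract) would be synthesized between the sparse captured views and then interleaved into the display's synthetic image according to the lenticular mapping.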
Pages: 26478-26491
Page count: 14