View Synthesis of Dynamic Scenes Based on Deep 3D Mask Volume

Cited by: 0
Authors
Lin, Kai-En [1 ]
Yang, Guowei [1 ]
Xiao, Lei [2 ]
Liu, Feng [3 ]
Ramamoorthi, Ravi [1 ]
Affiliations
[1] Univ Calif San Diego, CSE Dept, La Jolla, CA 92037 USA
[2] Meta, Real Labs Res, Redmond, WA 98052 USA
[3] Portland State Univ, Dept Comp Sci, Portland, OR 97207 USA
Keywords
Videos; Cameras; Three-dimensional displays; Heuristic algorithms; Rendering (computer graphics); Training; Synchronization; Computer vision; view synthesis
DOI
10.1109/TPAMI.2023.3289333
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Image view synthesis has seen great success in reconstructing photorealistic visuals, thanks to deep learning and various novel representations. The next key step toward immersive virtual experiences is view synthesis of dynamic scenes. However, several challenges remain, including the lack of high-quality training datasets and the additional time dimension introduced by videos of dynamic scenes. To address these issues, we introduce a multi-view video dataset captured with a custom 10-camera rig at 120 FPS. The dataset contains 96 high-quality scenes showing various visual effects and human interactions in outdoor settings. We develop a new algorithm, Deep 3D Mask Volume, which enables temporally stable view extrapolation from binocular videos of dynamic scenes captured by static cameras. Our algorithm addresses the temporal inconsistency of disocclusions by identifying the error-prone areas with a 3D mask volume and replacing them with the static background observed throughout the video. Our method enables manipulation in 3D space, as opposed to simple 2D masks. We demonstrate better temporal stability than frame-by-frame static view synthesis methods or those that use 2D masks. The resulting view synthesis videos show minimal flickering artifacts and allow for larger translational movements.
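To make the abstract's blending idea concrete, below is a minimal sketch of how a 3D mask volume could combine a per-frame multiplane representation with a static background before compositing a novel view. This is an illustration under stated assumptions, not the authors' implementation: the function name composite_with_mask_volume, the RGBA plane layout, and the mask convention (1 keeps per-frame content, 0 falls back to the static background) are all assumptions made for this example.

```python
import numpy as np

def composite_with_mask_volume(frame_mpi, background_mpi, mask_volume):
    """Blend a per-frame multiplane image (MPI) with a static background MPI.

    frame_mpi, background_mpi: (D, H, W, 4) RGBA planes, ordered far to near.
    mask_volume: (D, H, W, 1) values in [0, 1]; 1 marks regions where the
    per-frame estimate is trusted, 0 marks error-prone disocclusions that
    are replaced by the static background (assumed convention).
    """
    # Per-plane blend of dynamic content and static background.
    blended = mask_volume * frame_mpi + (1.0 - mask_volume) * background_mpi

    # Standard back-to-front over-compositing of the blended planes
    # into a single RGB image.
    rgb = np.zeros(blended.shape[1:3] + (3,), dtype=np.float32)
    for plane in blended:
        color, alpha = plane[..., :3], plane[..., 3:4]
        rgb = color * alpha + rgb * (1.0 - alpha)
    return rgb

# Usage with random data: 32 depth planes at 270x480 resolution.
D, H, W = 32, 270, 480
frame = np.random.rand(D, H, W, 4).astype(np.float32)
background = np.random.rand(D, H, W, 4).astype(np.float32)
mask = np.random.rand(D, H, W, 1).astype(np.float32)
image = composite_with_mask_volume(frame, background, mask)  # (H, W, 3)
```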
Pages: 13250-13264 (15 pages)