Depth-Guided Sparse Structure-from-Motion for Movies and TV Shows

Cited by: 4
Authors
Liu, Sheng [1 ,3 ]
Nie, Xiaohan [2 ]
Hamid, Raffay [2 ]
Affiliations
[1] Univ Buffalo, Buffalo, NY 14260 USA
[2] Amazon Prime Video, Seattle, WA USA
[3] Amazon, Seattle, WA USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022
DOI
10.1109/CVPR52688.2022.01551
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing approaches for Structure from Motion (SfM) produce impressive 3-D reconstruction results, especially when using imagery captured with large parallax. However, to create engaging video content in movies and TV shows, the amount by which a camera can be moved while filming a particular shot is often limited. The resulting small-motion parallax between video frames makes standard geometry-based SfM approaches not as effective for movies and TV shows. To address this challenge, we propose a simple yet effective approach that uses a single-frame depth prior obtained from a pretrained network to significantly improve geometry-based SfM in our small-parallax setting. To this end, we first use the depth estimates of the detected keypoints to reconstruct the point cloud and camera pose for the initial two-view reconstruction. We then perform depth-regularized optimization to register new images and triangulate new points during incremental reconstruction. To comprehensively evaluate our approach, we introduce a new dataset (StudioSfM) consisting of 130 shots with 21K frames from 15 studio-produced videos, manually annotated by a professional CG studio. We demonstrate that our approach: (a) significantly improves the quality of 3-D reconstruction in our small-parallax setting, (b) does not cause any degradation for data with large parallax, and (c) maintains the generalizability and scalability of geometry-based sparse SfM. Our dataset can be obtained at https://github.com/amazon-research/smallbaseline-camera-tracking.
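The abstract's mention of depth-regularized optimization suggests an objective that combines the standard reprojection error with a term tying each reconstructed point's depth to its single-frame depth estimate. The following is a minimal Python sketch of one such objective under a simple least-squares formulation; the function names, the per-camera scale alignment, and the weight lambda_depth are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reprojection_residual(K, R, t, X, x_obs):
    """Pixel residual of 3-D point X observed at x_obs in camera (R, t)."""
    x_cam = R @ X + t                       # point in camera coordinates
    x_img = K @ x_cam                       # homogeneous image coordinates
    return x_img[:2] / x_img[2] - x_obs     # perspective division, then error

def depth_prior_residual(R, t, X, d_mono, scale):
    """Difference between the reconstructed depth of X and the scale-aligned
    single-frame (monocular) depth estimate d_mono at the keypoint."""
    z = (R @ X + t)[2]
    return z - scale * d_mono

def depth_regularized_cost(observations, lambda_depth=0.1):
    """Sum of squared reprojection errors plus a weighted depth-prior term.
    `observations` holds (K, R, t, X, x_obs, d_mono, scale) tuples; in a real
    pipeline the poses, points, and scale would be the optimization variables."""
    cost = 0.0
    for K, R, t, X, x_obs, d_mono, scale in observations:
        cost += float(np.sum(reprojection_residual(K, R, t, X, x_obs) ** 2))
        cost += lambda_depth * float(depth_prior_residual(R, t, X, d_mono, scale) ** 2)
    return cost

if __name__ == "__main__":
    # Toy example: one camera at the origin looking down +z, one 3-D point.
    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    X = np.array([0.1, -0.2, 5.0])          # point 5 units in front of the camera
    x_obs = (K @ X)[:2] / (K @ X)[2]        # its ideal projection (zero reprojection error)
    d_mono, scale = 4.8, 1.0                # hypothetical monocular depth estimate
    print(depth_regularized_cost([(K, R, t, X, x_obs, d_mono, scale)]))
```

In a full incremental SfM pipeline, a residual of this form would presumably be minimized jointly over camera poses, 3-D points, and the depth-alignment scale, e.g. inside bundle adjustment.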
Pages: 15959-15968
Number of pages: 10
Related Papers (50 records in total)
  • [1] Chandrashekar, Akash; Papadakis, John; Willis, Andrew; Gantert, Jamie. Structure-From-Motion and RGBD Depth Fusion. IEEE SOUTHEASTCON 2018, 2018.
  • [2] Pastukhov, Alexander; Lissner, Anna; Füllekrug, Jana; Braun, Jochen. Sensory memory of illusory depth in structure-from-motion. ATTENTION, PERCEPTION, & PSYCHOPHYSICS, 2014, 76(1): 123-132.
  • [3] Braun, J.; Pastukhov, A. Disparate persistence of illusory depth and illusory motion in structure-from-motion displays. PERCEPTION, 2012, 41: 24-24.
  • [4] Hoppe, Christof; Klopschitz, Manfred; Donoser, Michael; Bischof, Horst. Incremental Surface Extraction from Sparse Structure-from-Motion Point Clouds. PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2013, 2013.
  • [5] Litvinov, Vadim; Lhuillier, Maxime. Incremental Solid Modeling from Sparse and Omnidirectional Structure-from-Motion Data. PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2013, 2013.
  • [6] Lhuillier, Maxime; Yu, Shuda. Manifold surface reconstruction of an environment from sparse Structure-from-Motion data. COMPUTER VISION AND IMAGE UNDERSTANDING, 2013, 117(11): 1628-1644.
  • [7] Guo, Shuai; Wang, Qiuwen; Gao, Yijie; Xie, Rong; Li, Lin; Zhu, Fang; Song, Li. Depth-Guided Robust Point Cloud Fusion NeRF for Sparse Input Views. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(9): 8093-8106.
  • [8] Saidpour, A.; Braunstein, M. L. Curvature and Depth Judgments of the Same Simulated Shape from Motion Parallax and Structure-from-Motion. INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 1994, 35(4): 1316-1316.
  • [9] Yang, Geonmo; Lee, Juhui; Kim, Ayoung; Cho, Younggun. Sparse Depth-Guided Image Enhancement Using Incremental GP with Informative Point Selection. SENSORS, 2023, 23(3).