Coherent video generation for multiple hand-held cameras with dynamic foreground

Cited by: 0
Authors
Fang-Lue Zhang
Connelly Barnes
Hao-Tian Zhang
Junhong Zhao
Gabriel Salas
Affiliations
[1] Victoria University of Wellington
[2] Adobe Research
[3] Stanford University
Source
Computational Visual Media | 2020 / Volume 6
Keywords
video editing; smooth temporal transitions; dynamic foreground; multiple cameras; hand-held cameras
DOI: not available
Abstract
For many social events such as public performances, multiple hand-held cameras may capture the same event. This footage is often collected by amateur cinematographers who typically have little control over the scene and may not pay close attention to the camera. For these reasons, each individually captured video may fail to cover the full duration of the event, or may lose track of interesting foreground content such as a performer. We introduce a new algorithm that can synthesize a single smooth video sequence of moving foreground objects captured by multiple hand-held cameras. This allows later viewers to gain a cohesive narrative experience that can transition between different cameras, even though the input footage may be less than ideal. We first introduce a graph-based method for selecting a good transition route. This allows us to automatically select good cut points in the hand-held videos, so that smooth transitions can be created between the resulting video shots. We also propose a method to synthesize a smooth photorealistic transition video between each pair of hand-held cameras, which preserves dynamic foreground content during the transition. Our experiments demonstrate that our method outperforms previous state-of-the-art methods, which struggle to preserve dynamic foreground content.
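The graph-based cut-point selection described in the abstract can be read as a shortest-path problem. As a minimal sketch (not the paper's actual formulation): nodes are (time step, camera) states, edges connect consecutive steps, and edge weights combine a per-shot quality cost with a transition penalty. The `stay_cost` and `transition_cost` functions are hypothetical placeholders for quality terms such as foreground visibility and camera-pose similarity.

```python
import heapq

def select_route(num_steps, cameras, transition_cost, stay_cost):
    """Choose one camera per time step, minimizing total cost with Dijkstra.

    stay_cost(t, c): cost of showing camera c at step t (hypothetical;
        e.g., penalizing lost foreground or heavy camera shake).
    transition_cost(t, a, b): cost of cutting from camera a to b at step t.
    """
    dist = {(0, c): stay_cost(0, c) for c in cameras}
    parent = {}
    pq = [(d, t, c) for (t, c), d in dist.items()]
    heapq.heapify(pq)
    done = set()
    while pq:
        cost, t, c = heapq.heappop(pq)
        if (t, c) in done:
            continue
        done.add((t, c))
        if t == num_steps - 1:
            # The first final-step state popped is globally optimal, since
            # Dijkstra settles states in nondecreasing cost order.
            route, key = [c], (t, c)
            while key in parent:
                key = parent[key]
                route.append(key[1])
            return list(reversed(route)), cost
        for nxt in cameras:
            step = stay_cost(t + 1, nxt)
            if nxt != c:
                step += transition_cost(t + 1, c, nxt)
            key = (t + 1, nxt)
            if key not in done and cost + step < dist.get(key, float("inf")):
                dist[key] = cost + step
                parent[key] = (t, c)
                heapq.heappush(pq, (cost + step, t + 1, nxt))
    return [], float("inf")
```

With two toy cameras where camera A briefly "loses" the performer (a large `stay_cost` at one step), the route cuts to camera B for that step and back, mimicking the cohesive multi-camera narrative the method targets.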
Pages: 291 - 306
Page count: 15
Related Papers (50 total)
  • [1] Coherent video generation for multiple hand-held cameras with dynamic foreground
    Zhang, Fang-Lue
    Barnes, Connelly
    Zhang, Hao-Tian
    Zhao, Junhong
    Salas, Gabriel
    Computational Visual Media, 2020, 6 (03): 291 - 306
  • [2] Deep Video Deblurring for Hand-held Cameras
    Su, Shuochen
    Delbracio, Mauricio
    Wang, Jue
    Sapiro, Guillermo
    Heidrich, Wolfgang
    Wang, Oliver
    30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 237 - 246
  • [3] Dynamic Object Localization Using Hand-held Cameras
    Gullapally, Sai Chowdary
    Malireddi, Sri Raghu
    Raman, Shanmuganathan
    2015 Twenty First National Conference on Communications (NCC), 2015
  • [4] Diminished reality via multiple hand-held cameras
    Jarusirisawad, Songkran
    Saito, Hideo
    2007 First ACM/IEEE International Conference on Distributed Smart Cameras, 2007: 241 - 248
  • [5] Hand-held fundus cameras
    Youngson, R. M.
    American Journal of Ophthalmology, 1981, 91 (03): 414
  • [6] Using Hand-Held Point and Shoot Video Cameras in Clinical Education
    Stoten, Sharon
    Journal of Continuing Education in Nursing, 2011, 42 (02): 55 - 56
  • [7] ROI-Based Video Stabilization Algorithm for Hand-Held Cameras
    Lee, Dong-bok
    Choi, Ick-hyun
    Song, Byung Cheol
    Lee, Tae Hwan
    2012 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2012: 314 - 318
  • [8] High Dynamic Range Image Reconstruction from Hand-held Cameras
    Lu, Pei-Ying
    Huang, Tz-Huan
    Wu, Meng-Sung
    Cheng, Yi-Ting
    Chuang, Yung-Yu
    CVPR: 2009 IEEE Conference on Computer Vision and Pattern Recognition, Vols 1-4, 2009: 509 - 516
  • [9] Video Deblurring for Hand-held Cameras Using Patch-based Synthesis
    Cho, Sunghyun
    Wang, Jue
    Lee, Seungyong
    ACM Transactions on Graphics, 2012, 31 (04)