Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Institutions
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV), 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
DOI
10.1109/3DV.2016.46
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
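The depth cue the abstract exploits is the relationship between scene depth and defocus blur. As a rough illustration of that cue (a minimal sketch using the standard thin-lens model, not the authors' actual algorithm; all function and parameter names here are hypothetical), the predicted circle-of-confusion diameter on the sensor can be computed like this:

```python
def coc_diameter_mm(depth_mm, focus_mm, focal_mm, f_number):
    """Thin-lens circle-of-confusion diameter (in mm on the sensor)
    for a point at depth_mm when the lens is focused at focus_mm."""
    # Aperture diameter A = f / N (thin-lens approximation).
    aperture = focal_mm / f_number
    # Magnification of the focus plane onto the sensor.
    magnification = focal_mm / (focus_mm - focal_mm)
    # Blur grows with the normalized distance from the focus plane.
    return aperture * magnification * abs(depth_mm - focus_mm) / depth_mm

# Example: a 50 mm f/2 lens focused at 2 m renders a point at 4 m
# with a blur circle of roughly 0.32 mm on the sensor.
print(coc_diameter_mm(4000, 2000, 50, 2.0))
```

Sweeping `focus_mm` back and forth, as the paper's capture protocol does with the focus ring, makes each scene point pass through sharp focus exactly when `depth_mm == focus_mm`, which is what turns the defocus blur into a usable depth signal.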
Pages: 370-379 (10 pages)
Related papers (50 items total):
  • [21] Depth-based defocus map estimation using off-axis apertures
    Lee, Eunsung
    Chae, Eunjung
    Cheong, Hejin
    Jeon, Semi
    Paik, Joonki
    OPTICS EXPRESS, 2015, 23 (17) : 21958 - 21971
  • [22] Defocus Map Estimation From a Single Image Based on Two-Parameter Defocus Model
    Liu, Shaojun
    Zhou, Fei
    Liao, Qingmin
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (12) : 5943 - 5956
  • [23] A framework for estimating relative depth in video
    Rzeszutek, Richard
    Androutsos, Dimitrios
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2015, 133 : 15 - 29
  • [24] Defocus Blur Detection and Estimation from Imaging Sensors
    Li, Jinyang
    Liu, Zhijing
    Yao, Yong
    SENSORS, 2018, 18 (04)
  • [25] Omnidirectional Multicamera Video Stitching Using Depth Maps
    Bosch, Josep
    Istenic, Klemen
    Gracias, Nuno
    Garcia, Rafael
    Ridao, Pere
    IEEE JOURNAL OF OCEANIC ENGINEERING, 2020, 45 (04) : 1337 - 1352
  • [26] Depth-Based Multiview Distributed Video Coding
    Petrazzuoli, Giovanni
    Maugey, Thomas
    Cagnazzo, Marco
    Pesquet-Popescu, Beatrice
    IEEE TRANSACTIONS ON MULTIMEDIA, 2014, 16 (07) : 1834 - 1848
  • [27] Defocus-based three-dimensional particle location with extended depth of field via color coding
    Cao, Zhaolou
    Zhai, Chunjie
    APPLIED OPTICS, 2019, 58 (17) : 4734 - 4739
  • [28] A Refined Weighted Mode Filtering Approach for Depth Video Enhancement
    Zuo, Xinxin
    Zheng, Jiangbin
    2013 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICRV 2013), 2013 : 138 - 144
  • [29] Layer Assignment Based on Depth Data Distribution for Multiview-Plus-Depth Scalable Video Coding
    Karlsson, Linda S.
    Sjostrom, Marten
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2011, 21 (06) : 742 - 754
  • [30] Spatial error concealment for intra-coded depth maps in multiview video-plus-depth
    Amado Assuncao, Pedro A.
    Marcelino, Sylvain
    Soares, Salviano
    de Faria, Sergio M. M.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2017, 76 (12) : 13835 - 13858