Video Depth-From-Defocus

Cited by: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV) | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
D O I
10.1109/3DV.2016.46
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Subject classification codes
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
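The defocus cue the abstract describes comes from the thin-lens imaging model: the farther a scene point lies from the focus plane, the larger its blur circle on the sensor. The sketch below is the standard thin-lens circle-of-confusion formula, not code from the paper itself, and all parameter values are illustrative assumptions:

```python
def coc_diameter(d, d_focus, f, aperture):
    """Circle-of-confusion diameter (same units as the inputs) for a
    scene point at distance d, with the lens focused at d_focus.

    Thin-lens model: c = A * f / (d_focus - f) * |d - d_focus| / d
    where A is the aperture diameter and f the focal length.
    """
    if d <= 0 or d_focus <= f:
        raise ValueError("need d > 0 and d_focus > f")
    return aperture * (f / (d_focus - f)) * abs(d - d_focus) / d

# Illustrative values: 50 mm lens at f/2 (25 mm aperture), focused at 2 m.
f, A, d_focus = 0.050, 0.025, 2.0
in_focus = coc_diameter(2.0, d_focus, f, A)  # point on the focus plane: no blur
behind = coc_diameter(4.0, d_focus, f, A)    # blur diameter grows with defocus
```

Depth-from-defocus inverts this relation: from blur diameters measured across frames with known or estimated focus distances, one can solve for the depth of each pixel; continuously sweeping the focus ring, as the paper's capture procedure does, makes this blur signal strongly visible over time.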
Pages: 370-379
Page count: 10
Related papers
50 records in total
  • [1] Rational-operator-based depth-from-defocus approach to scene reconstruction
    Li, Ang
    Staunton, Richard
    Tjahjadi, Tardi
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2013, 30 (09) : 1787 - 1795
  • [2] Uniting Stereo and Depth-from-Defocus: A Thin Lens-based Variational Framework for Multiview Reconstruction
    Friedlander, Robert D.
    Yang, Huizong
    Yezzi, Anthony J.
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 4401 - 4410
  • [3] Defocus Discrimination in Video: Motion in Depth
    Petrella, Vincent A.
    Labute, Simon
    Langer, Michael S.
    Kry, Paul G.
    I-PERCEPTION, 2017, 8 (06):
  • [4] Video-rate calculation of depth from defocus on a FPGA
    Raj, Alex Noel Joseph
    Staunton, Richard C.
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2018, 14 (02) : 469 - 480
  • [5] ROTATING CODED APERTURE FOR DEPTH FROM DEFOCUS
    Yang, Jingyu
    Ma, Jinlong
    Jiang, Bin
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1726 - 1730
  • [6] Spray drop measurements using depth from defocus
    Zhou Wu
    Tropea, Cameron
    Chen Benting
    Zhang Yukun
    Luo Xu
    Cai Xiaoshu
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2020, 31 (07)
  • [7] Coding depth perception from image defocus
    Super, Hans
    Romeo, August
    VISION RESEARCH, 2014, 105 : 199 - 203
  • [8] An efficient method for monocular depth from defocus
    Leroy, Jean-Vincent
    Simon, Thierry
    Descenes, Francois
    PROCEEDINGS ELMAR-2008, VOLS 1 AND 2, 2008, : 133 - +
  • [9] Rational filter design for depth from defocus
    Raj, Alex Noel Joseph
    Staunton, Richard C.
    PATTERN RECOGNITION, 2012, 45 (01) : 198 - 207
  • [10] Fast and accurate auto-focusing algorithm based on the combination of depth from focus and improved depth from defocus
    Zhang, Xuedian
    Liu, Zhaoqing
    Jiang, Minshan
    Chang, Min
    OPTICS EXPRESS, 2014, 22 (25): 31237 - 31247