Video Depth-From-Defocus

Times Cited: 7
Authors
Kim, Hyeongwoo [1 ]
Richardt, Christian [2 ]
Theobalt, Christian [3 ]
Affiliations
[1] Max Planck Inst Informat, Saarbrucken, Germany
[2] Intel Visual Comp Inst, Saarbrucken, Germany
[3] Univ Bath, Bath BA2 7AY, Avon, England
Source
PROCEEDINGS OF 2016 FOURTH INTERNATIONAL CONFERENCE ON 3D VISION (3DV) | 2016
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
IMAGE; BLUR; PHOTOGRAPHY; CAMERA;
DOI
10.1109/3DV.2016.46
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Many compelling video post-processing effects, in particular aesthetic focus editing and refocusing effects, are feasible if per-frame depth information is available. Existing computational methods to capture RGB and depth either purposefully modify the optics (coded aperture, light-field imaging), or employ active RGB-D cameras. Since these methods are less practical for users with normal cameras, we present an algorithm to capture all-in-focus RGB-D video of dynamic scenes with an unmodified commodity video camera. Our algorithm turns the often unwanted defocus blur into a valuable signal. The input to our method is a video in which the focus plane is continuously moving back and forth during capture, and thus defocus blur is provoked and strongly visible. This can be achieved by manually turning the focus ring of the lens during recording. The core algorithmic ingredient is a new video-based depth-from-defocus algorithm that computes space-time-coherent depth maps, deblurred all-in-focus video, and the focus distance for each frame. We extensively evaluate our approach, and show that it enables compelling video post-processing effects, such as different types of refocusing.
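The abstract's core premise, that defocus blur is a depth signal, rests on standard thin-lens geometry: the diameter of a point's blur circle (circle of confusion) grows with its distance from the focus plane. The sketch below illustrates that relationship only; it is not the paper's algorithm, and the function and parameter names are illustrative assumptions.

```python
def coc_diameter(f, n_stop, s_focus, s_subject):
    """Thin-lens circle-of-confusion diameter, in meters.

    f         -- focal length of the lens (m)
    n_stop    -- f-number, so the aperture diameter is f / n_stop
    s_focus   -- distance from the lens to the focus plane (m)
    s_subject -- distance from the lens to the scene point (m)
    """
    aperture = f / n_stop              # physical aperture diameter
    magnification = f / (s_focus - f)  # image-side magnification at the focus plane
    # Blur grows with the subject's offset from the focus plane.
    return aperture * magnification * abs(s_subject - s_focus) / s_subject

# Example: 50 mm f/2 lens focused at 2 m.
on_plane = coc_diameter(0.05, 2.0, 2.0, 2.0)   # point on the focus plane: no blur
behind = coc_diameter(0.05, 2.0, 2.0, 4.0)     # point 2 m behind: visible blur
```

Note the two-fold ambiguity: a point in front of and a point behind the focus plane can produce the same blur diameter, which is one reason the method sweeps the focus plane back and forth rather than holding it fixed.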
Pages: 370-379
Page Count: 10
Related Papers
(showing records [41]-[50] of 50)
  • [41] Joint Bit Allocation and Rate Control for Coding Multi-View Video Plus Depth Based 3D Video
    Shao, Feng
    Jiang, Gangyi
    Lin, Weishi
    Yu, Mei
    Dai, Qionghai
    IEEE TRANSACTIONS ON MULTIMEDIA, 2013, 15 (08) : 1843 - 1854
  • [42] A Novel Depth-Based Virtual View Synthesis Method for Free Viewpoint Video
    Ahn, Ilkoo
    Kim, Changick
    IEEE TRANSACTIONS ON BROADCASTING, 2013, 59 (04) : 614 - 626
  • [43] Depth Coding Using a Boundary Reconstruction Filter for 3-D Video Systems
    Oh, Kwan-Jung
    Vetro, Anthony
    Ho, Yo-Sung
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2011, 21 (03) : 350 - 359
  • [44] Learning Depth from Focus in the Wild
    Won, Changyeon
    Jeon, Hae-Gon
    COMPUTER VISION - ECCV 2022, PT I, 2022, 13661 : 1 - 18
  • [45] A Deep Sequence Learning Framework for Action Recognition in Small-Scale Depth Video Dataset
    Bulbul, Mohammad Farhad
    Ullah, Amin
    Ali, Hazrat
    Kim, Daijin
    SENSORS, 2022, 22 (18)
  • [46] Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment
    Zhao, Hongjian
    Xia, Shixiong
    Yao, Rui
    Niu, Qiang
    Zhou, Yong
    JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (06)
  • [47] A multi-frame fusion video deraining neural network based on depth and luminance features
    Li, Fengqi
    Guo, Mengchao
    Su, Rui
    Wang, Yanjuan
    Wang, Yi
    Xu, Fengqiang
    DISPLAYS, 2024, 85
  • [48] EXAMPLE BASED DEPTH FROM FOG
    Gibson, Kristofor B.
    Belongie, Serge J.
    Nguyen, Truong Q.
    2013 20TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP 2013), 2013, : 728 - 732
  • [49] VIDEO PROCESSING AND 3D MODELLING OF CHEST MOVEMENT USING MS KINECT DEPTH SENSOR
    Prochazka, Ales
    Vysata, Oldrich
    Schaetz, Martin
    Charvatova, Hana
    Suarez Araujo, Carmen Paz
    Geman, Oana
    Marik, Vladimir
    2016 INTERNATIONAL WORKSHOP ON COMPUTATIONAL INTELLIGENCE FOR MULTIMEDIA UNDERSTANDING (IWCIM), 2016,
  • [50] DaGAN++: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
    Hong, Fa-Ting
    Shen, Li
    Xu, Dan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 2997 - 3012