Bayesian Depth-From-Defocus With Shading Constraints

Cited by: 6
Authors
Li, Chen [1 ]
Su, Shuochen [2 ]
Matsushita, Yasuyuki [3 ]
Zhou, Kun [4 ]
Lin, Stephen [5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou 310058, Zhejiang, Peoples R China
[2] Univ British Columbia, Vancouver, BC V6T 1Z4, Canada
[3] Osaka Univ, Osaka 5650871, Japan
[4] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou 310027, Peoples R China
[5] Microsoft Res, Beijing 100080, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Depth-from-defocus; shape-from-shading; illumination estimation; INCOMPLETE DATA; SHAPE; STEREO; RESTORATION; LIKELIHOOD; RECOVERY;
DOI
10.1109/TIP.2015.2507403
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, that can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, are challenging to recover accurately from surfaces that contain texture. To address this issue, we propose an iterative technique that uses depth information to improve shading estimation, which in turn is used to improve depth estimation in the presence of texture. The shading estimation can be performed in general scenes with unknown illumination using an approximate estimate of scene lighting. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.
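The alternation the abstract describes can be sketched in miniature: a coarse depth estimate (standing in for the DFD output) is refined so that its predicted shading better matches the observed image. This is not the authors' Bayesian implementation; the Lambertian shading model, the assumed light direction, the energy function, and the gradient-descent update are all assumptions made purely for illustration, on a toy 1D height field.

```python
import numpy as np

# Assumed light direction for the toy Lambertian model (not from the paper).
LIGHT = np.array([0.3, 0.95])

def shading_from_depth(depth):
    """Lambertian shading of a 1D height field: surface normals are
    derived from the depth slope and dotted with the light direction."""
    slope = np.gradient(depth)
    normals = np.stack([-slope, np.ones_like(slope)])
    normals /= np.linalg.norm(normals, axis=0)
    return np.clip(LIGHT @ normals, 0.0, None)

def energy(depth, image, smooth_w=0.1):
    """MAP-style energy: a shading data term plus a smoothness prior,
    a crude stand-in for the paper's Bayesian formulation."""
    data = np.sum((image - shading_from_depth(depth)) ** 2)
    prior = smooth_w * np.sum(np.diff(depth, 2) ** 2)  # 2nd differences
    return data + prior

def refine_depth(depth, image, iters=100, step=0.02, eps=1e-4):
    """Refine depth by finite-difference gradient descent on the energy:
    each pass uses the current depth's predicted shading to update depth."""
    d = depth.copy()
    for _ in range(iters):
        e0 = energy(d, image)
        grad = np.zeros_like(d)
        for i in range(d.size):
            bumped = d.copy()
            bumped[i] += eps
            grad[i] = (energy(bumped, image) - e0) / eps
        d -= step * grad
    return d

# Synthetic 1D scene: the observed "image" is the true surface's shading.
x = np.linspace(-1.0, 1.0, 32)
true_depth = np.exp(-4.0 * x ** 2)   # a smooth bump
image = shading_from_depth(true_depth)
coarse = 0.5 * true_depth            # flattened, coarse "DFD-like" estimate

refined = refine_depth(coarse, image)
print(energy(coarse, image), energy(refined, image))
```

The sketch captures only the feedback loop (depth predicts shading, the shading residual drives the depth update); the paper's actual method additionally handles texture, unknown illumination, and defocus measurements within its Bayesian framework.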
Pages: 589-600 (12 pages)
Related Papers (50 total)
  • [21] REGULARIZED DEPTH FROM DEFOCUS
    Namboodiri, Vinay P.
    Chaudhuri, Subhasis
    Hadap, Sunil
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2008, : 1520 - 1523
  • [22] Perceived depth from shading boundaries
    Kim, Juno
    Anstis, Stuart
    JOURNAL OF VISION, 2016, 16 (06):
  • [23] Depth from Defocus Using Geometric Optics Regularization
    Wu, Qiufeng
    Wang, Kuanquan
    Zuo, Wangmeng
    ADVANCES IN APPLIED SCIENCE, ENGINEERING AND TECHNOLOGY, 2013, 709 : 511 - 514
  • [24] Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring
    Zhou, Changyin
    Lin, Stephen
    Nayar, Shree K.
    International Journal of Computer Vision, 2011, 93 : 53 - 72
  • [25] Depth from Defocus Technique Based on Cross Reblurring
    Takemura, Kazumi
    Yoshida, Toshiyuki
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (11) : 2083 - 2092
  • [26] A Unified Approach for Registration and Depth in Depth from Defocus
    Ben-Ari, Rami
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2014, 36 (06) : 1041 - 1055
  • [27] Blur Calibration for Depth from Defocus
    Mannan, Fahim
    Langer, Michael S.
    2016 13TH CONFERENCE ON COMPUTER AND ROBOT VISION (CRV), 2016, : 281 - 288
  • [28] Depth from Defocus as a Special Case of the Transport of Intensity Equation
    Alexander, Emma
    Kabuli, Leyla A.
    Cossairt, Oliver S.
    Waller, Laura
    2021 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP), 2021,
  • [29] Depth from Defocus Applied to Auto Focus
    Yasugi, Shunsuke
    Nguyen, Khang
    Ezawa, Kozo
    Kawamura, Takashi
    2014 IEEE 3RD GLOBAL CONFERENCE ON CONSUMER ELECTRONICS (GCCE), 2014, : 171 - 173
  • [30] ROTATING CODED APERTURE FOR DEPTH FROM DEFOCUS
    Yang, Jingyu
    Ma, Jinlong
    Jiang, Bin
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1726 - 1730