Bayesian Depth-From-Defocus With Shading Constraints

Cited by: 6
Authors
Li, Chen [1 ]
Su, Shuochen [2 ]
Matsushita, Yasuyuki [3 ]
Zhou, Kun [4 ]
Lin, Stephen [5 ]
Affiliations
[1] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou 310058, Zhejiang, Peoples R China
[2] Univ British Columbia, Vancouver, BC V6T 1Z4, Canada
[3] Osaka Univ, Osaka 5650871, Japan
[4] Zhejiang Univ, State Key Lab CAD & CG, Hangzhou 310027, Peoples R China
[5] Microsoft Res, Beijing 100080, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Depth-from-defocus; shape-from-shading; illumination estimation; INCOMPLETE DATA; SHAPE; STEREO; RESTORATION; LIKELIHOOD; RECOVERY;
DOI
10.1109/TIP.2015.2507403
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information. DFD suffers from important limitations, namely coarse shape reconstruction and poor accuracy on textureless surfaces, which can be overcome with the help of shading. We integrate both forms of data within a Bayesian framework that capitalizes on their relative strengths. Shading data, however, is challenging to accurately recover from surfaces that contain texture. To address this issue, we propose an iterative technique that utilizes depth information to improve shading estimation, which in turn is used to elevate depth estimation in the presence of textures. The shading estimation can be performed in general scenes with unknown illumination using an approximate estimate of scene lighting. With this approach, we demonstrate improvements over existing DFD techniques, as well as effective shape reconstruction of textureless surfaces.
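The core idea in the abstract, fusing a coarse depth estimate with fine-scale shading cues in a Bayesian (MAP) framework, can be illustrated on a toy 1-D problem. Everything below (the test signal, the noise levels, and the linear surface-gradient term standing in for the paper's shading and defocus likelihoods, with the iterative alternation omitted) is a hypothetical sketch, not the authors' actual formulation:

```python
import numpy as np

# Toy 1-D sketch: fuse a noisy, DFD-like depth observation with a
# shading-like surface-gradient observation via Gaussian MAP estimation.
# The signal, noise levels, and linear gradient model are illustrative
# stand-ins, not the paper's actual likelihood or priors.

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
z_true = np.sin(2.0 * np.pi * x)              # ground-truth depth profile

sigma_z, sigma_g = 0.3, 0.05
z_dfd = z_true + rng.normal(0.0, sigma_z, n)  # coarse, noisy depth (DFD role)
g_shade = np.gradient(z_true, x) + rng.normal(0.0, sigma_g, n)  # gradient cue (shading role)

# Forward-difference operator D ((n-1) x n) so that (D z)_i ~ z'(x_i).
h = x[1] - x[0]
D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h

# Gaussian MAP estimate:
#   argmin_z ||z - z_dfd||^2 / sigma_z^2  +  ||D z - g||^2 / sigma_g^2
lam = (sigma_z / sigma_g) ** 2                # noise-variance ratio
A = np.eye(n) + lam * (D.T @ D)
b = z_dfd + lam * (D.T @ g_shade[:-1])
z_map = np.linalg.solve(A, b)

err_dfd = float(np.sqrt(np.mean((z_dfd - z_true) ** 2)))
err_map = float(np.sqrt(np.mean((z_map - z_true) ** 2)))
```

The gradient term supplies the fine structure that the noisy depth term alone cannot, so `err_map` comes out well below `err_dfd`. The paper's framework replaces these toy Gaussian terms with defocus and shading models and, per the abstract, alternates between refining the shading estimate and the depth estimate.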
Pages: 589-600
Page count: 12