A Novel Depth from Defocus Framework Based on a Thick Lens Camera Model

Cited by: 2
Authors
Bailey, Matthew [1 ]
Guillemaut, Jean-Yves [1 ]
Institutions
[1] Univ Surrey, Ctr Vis Speech & Signal Proc, Guildford, Surrey, England
Source
2020 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2020), 2020
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
ENERGY MINIMIZATION; SHAPE; RECOVERY;
DOI
10.1109/3DV50981.2020.00131
CLC Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Reconstruction approaches based on monocular defocus analysis, such as Depth from Defocus (DFD), often utilise the thin lens camera model. Despite this widespread adoption, the model has inherent limitations. Coupled with the invalid parameterisation commonplace in the literature, the overly simplified image formation it describes leads to inaccurate defocus modelling, especially in macro-scale scenes. As a result, DFD reconstructions based on this model are not geometrically consistent and are typically restricted to single-view applications. Consequently, the handful of existing approaches which attempt to include additional viewpoints have had only limited success. In this work, we address these issues by instead utilising a thick lens camera model, and propose a novel calibration procedure to accurately parameterise it. The effectiveness of our model and calibration is demonstrated with a novel DFD reconstruction framework. We achieve highly detailed, geometrically accurate and complete 3D models of real-world scenes from multi-view focal stacks. To our knowledge, this is the first time DFD has been successfully applied to complete scene modelling in this way.
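For intuition only (this is not the authors' method), the following minimal sketch illustrates the thin lens defocus model the abstract critiques: the circle-of-confusion diameter grows with the distance between an object and the focal plane. The function name and the closed-form expression used here are standard textbook geometry under thin-lens assumptions, not taken from the paper; a thick lens model additionally separates the front and rear principal planes, which this sketch ignores.

```python
def thin_lens_coc(f, n_stop, d_focus, d_obj):
    """Circle-of-confusion diameter under the thin lens model.

    f       -- focal length (metres)
    n_stop  -- f-number (aperture = f / n_stop)
    d_focus -- distance at which the lens is focused (metres)
    d_obj   -- actual object distance (metres)

    Blur is zero when d_obj == d_focus and grows with defocus.
    """
    aperture = f / n_stop
    # Standard thin-lens blur geometry: scale the aperture by the
    # relative defocus and by the image-side magnification factor.
    return aperture * abs(d_obj - d_focus) / d_obj * f / (d_focus - f)


# An in-focus object produces no blur; a defocused one produces some.
print(thin_lens_coc(0.05, 2.0, 1.0, 1.0))   # object exactly in focus
print(thin_lens_coc(0.05, 2.0, 1.0, 2.0))   # object behind focal plane
```

In macro-scale scenes the lens thickness becomes comparable to the object distance, so measuring `d_obj` and `d_focus` from a single idealised lens plane (as above) misstates the blur, which is the inaccuracy the thick lens parameterisation in the paper is designed to remove.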
Pages: 1206 - 1215 (10 pages)
Related Papers
24 items total
  • [1] Uniting Stereo and Depth-from-Defocus: A Thin Lens-based Variational Framework for Multiview Reconstruction
    Friedlander, Robert D.
    Yang, Huizong
    Yezzi, Anthony J.
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021: 4401 - 4410
  • [2] Depth from Defocus Technique Based on Cross Reblurring
    Takemura, Kazumi
    Yoshida, Toshiyuki
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (11) : 2083 - 2092
  • [3] Active depth from defocus system using coherent illumination and a no moving parts camera
    Amin, M. Junaid
    Riza, Nabeel A.
    OPTICS COMMUNICATIONS, 2016, 359 : 135 - 145
  • [4] Theoretical performance model for single image depth from defocus
    Trouve-Peloux, Pauline
    Champagnat, Frederic
    Le Besnerais, Guy
    Idier, Jerome
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2014, 31 (12) : 2650 - 2662
  • [5] Nanoscale depth reconstruction from defocus: within an optical diffraction model
    Wei, Yangjie
    Wu, Chengdong
    Dong, Zaili
    OPTICS EXPRESS, 2014, 22 (21): 25481 - 25493
  • [6] AFM Probes Depth Estimation from Convolutional Neural Networks Based Defocus Depth Measurement
    Yuan, Shuai
    Wang, Zebin
    Yang, Yongliang
    INSTRUMENTS AND EXPERIMENTAL TECHNIQUES, 2024, 67 (05) : 1024 - 1032
  • [7] Depth Estimation from Defocus Images Based on Oriented Heat-flows
    Hong, Liu
    Yu, Jia
    Hong, Cheng
    Sui, Wei
    2009 SECOND INTERNATIONAL CONFERENCE ON MACHINE VISION, PROCEEDINGS (ICMV 2009), 2009: 212 - 215
  • [8] Depth from defocus (DFD) based on VFISTA optimization algorithm in micro/nanometer vision
    Liu, Yongjun
    Wei, Yangjie
    Wang, Yi
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2019, 22 (Suppl 1): 1459 - 1467
  • [9] Rational-operator-based depth-from-defocus approach to scene reconstruction
    Li, Ang
    Staunton, Richard
    Tjahjadi, Tardi
    JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2013, 30 (09) : 1787 - 1795
  • [10] DEPTH-FROM-DEFOCUS-BASED CONVENIENT COAXIAL PROJECTION PROFILOMETRY FOR LARGE MEASUREMENT RANGE
    Song, Xiaokai
    Ding, Yating
    Ran, Zipeng
    Cai, Bolin
    Chen, Xiangcheng
    MECHATRONIC SYSTEMS AND CONTROL, 2024, 52 (01): 32 - 41