Depth Estimation Method for Monocular Camera Defocus Images in Microscopic Scenes

Cited by: 66
Authors
Ban, Yuxi [1 ]
Liu, Mingzhe [2 ]
Wu, Peng [1 ]
Yang, Bo [1 ]
Liu, Shan [1 ]
Yin, Lirong [3 ]
Zheng, Wenfeng [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Automat, Chengdu 610054, Peoples R China
[2] Chengdu Univ Technol, Coll Comp Sci & Cyber Secur, Chengdu 610059, Peoples R China
[3] Louisiana State Univ, Dept Geog & Anthropol, Baton Rouge, LA 70803 USA
Keywords
defocusing image; depth estimation; Markov random field; microscopic scene; geometric constraints; point spread function; FUSION; STEREO; MOTION;
DOI
10.3390/electronics11132012
CLC classification
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
When using a monocular camera for detection or observation, only two-dimensional information can be obtained, which is far from adequate for surgical robot manipulation and workpiece detection. Therefore, at this scale, obtaining three-dimensional information about the observed object, and in particular estimating the depth of each object's surface points, has become a key issue. This paper proposes two methods to solve the problem of depth estimation from defocused images in microscopic scenes: a depth estimation method for defocused images based on a Markov random field, and a method based on geometric constraints. Based on the real-aperture imaging principle, geometric constraints on the relative defocus parameters of the point spread function are derived, which improves on the traditional iterative method and increases the algorithm's efficiency.
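The abstract rests on the real-aperture (thin-lens) relation between the blur circle produced by a defocused point and that point's depth. The sketch below is a minimal illustration of that relation only, not the paper's Markov random field or geometric-constraint method; the camera parameters (F, D, V, PIXEL_PITCH), the Gaussian point-spread-function model with ratio K_SIGMA, and the brute-force blur-matching step are all assumptions introduced for the example.

```python
# Minimal depth-from-defocus sketch (illustrative only, not the paper's algorithm).
# It matches a defocused image against Gaussian blurs of a sharp image, then
# inverts the thin-lens blur model to recover an object distance.
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed camera parameters (metres): focal length, aperture diameter,
# lens-to-sensor distance, and pixel pitch; K_SIGMA is an assumed ratio
# between the Gaussian PSF sigma and the blur-circle radius.
F, D, V, PIXEL_PITCH = 0.05, 0.02, 0.055, 5e-6
K_SIGMA = 0.5

def estimate_sigma(sharp, defocused, sigmas=np.linspace(0.1, 10.0, 100)):
    """Brute-force search for the Gaussian sigma (in pixels) that best
    explains the defocused image as a blurred copy of the sharp one."""
    errors = [np.mean((gaussian_filter(sharp, s) - defocused) ** 2) for s in sigmas]
    return sigmas[int(np.argmin(errors))]

def depth_from_blur(sigma_px, front_of_focus=True):
    """Invert the thin-lens blur model: blur-circle diameter
    c = D * V * |1/F - 1/u - 1/V|, so
    1/u = 1/F - 1/V + c/(D*V) in front of the focal plane (minus behind it)."""
    c = (sigma_px / K_SIGMA) * 2.0 * PIXEL_PITCH          # blur-circle diameter (m)
    sign = 1.0 if front_of_focus else -1.0
    inv_u = 1.0 / F - 1.0 / V + sign * c / (D * V)
    return 1.0 / inv_u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sharp = rng.random((128, 128))
    defocused = gaussian_filter(sharp, 2.0)               # synthetic defocus
    sigma = estimate_sigma(sharp, defocused)
    print(f"estimated sigma: {sigma:.2f} px, depth: {depth_from_blur(sigma):.4f} m")
```

In this toy setup the blur is global; the paper instead estimates defocus locally and regularizes the resulting depth map (via the Markov random field and the derived geometric constraints), but the per-point depth inversion follows the same thin-lens geometry.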
Pages: 15