Projection defocus analysis for scene capture and image display

Cited: 99
Authors
Zhang, Li [1 ]
Nayar, Shree [1 ]
Affiliations
[1] Columbia Univ, Comp Vis Lab, New York, NY 10027 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2006, Vol. 25, No. 3
Keywords
projector defocus; temporal defocus analysis; depth recovery; multi-focal projection; projector depixelation; refocus synthesis; image composition;
DOI
10.1145/1141911.1141974
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202; 0835;
Abstract
In order to produce bright images, projectors have large apertures and hence narrow depths of field. In this paper, we present methods for robust scene capture and enhanced image display based on projection defocus analysis. We model a projector's defocus using a linear system. This model is used to develop a novel temporal defocus analysis method to recover depth at each camera pixel by estimating the parameters of its projection defocus kernel in the frequency domain. Compared to most depth recovery methods, our approach is more accurate near depth discontinuities. Furthermore, by using a coaxial projector-camera system, we ensure that depth is computed at all camera pixels, without any missing parts. We show that the recovered scene geometry can be used for refocus synthesis and for depth-based image composition. Using the same projector defocus model and estimation technique, we also propose a defocus compensation method that filters a projection image in a spatially varying, depth-dependent manner to minimize its defocus blur after it is projected onto the scene. This method effectively increases the depth of field of a projector without modifying its optics. Finally, we present an algorithm that exploits projector defocus to reduce the strong pixelation artifacts produced by digital projectors, while preserving the quality of the projected image. We have experimentally verified each of our methods using real scenes.
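The following is a minimal, self-contained sketch (not the authors' implementation) of the two ideas the abstract describes: estimating a defocus kernel in the frequency domain by comparing an in-focus reference pattern with its defocused observation, and pre-filtering a projection image in a depth-dependent manner so that it appears sharper after projector defocus. A Gaussian kernel stands in for the paper's projection defocus kernel; the function names (estimate_sigma, wiener_prefilter) and the noise-to-signal parameter nsr are illustrative assumptions, and the calibration step that maps the estimated kernel width to depth is omitted.

```python
# Sketch of frequency-domain defocus-kernel estimation and depth-dependent
# defocus compensation, under simplifying assumptions (square images,
# Gaussian defocus kernel, brute-force search over kernel widths).
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian point-spread function on a size x size grid."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def estimate_sigma(reference, observed, sigmas=np.linspace(0.5, 6.0, 56)):
    """Pick the Gaussian width whose blur of `reference` best matches
    `observed` in Fourier magnitude (a stand-in for the paper's kernel
    parameter estimation)."""
    R = np.fft.fft2(reference)
    O = np.fft.fft2(observed)
    best_sigma, best_err = None, np.inf
    for s in sigmas:
        H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(reference.shape[0], s)))
        err = np.sum((np.abs(H * R) - np.abs(O)) ** 2)
        if err < best_err:
            best_sigma, best_err = s, err
    return best_sigma

def wiener_prefilter(image, sigma, nsr=1e-2):
    """Defocus compensation: apply a Wiener-style inverse of the estimated
    defocus kernel so that blurring the result roughly restores the input
    (clipped to the displayable range)."""
    H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(image.shape[0], sigma)))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener inverse filter
    comp = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(comp, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pattern = rng.random((64, 64))                  # projected calibration pattern
    true_sigma = 2.5                                 # unknown defocus at this depth
    H = np.fft.fft2(np.fft.ifftshift(gaussian_psf(64, true_sigma)))
    observed = np.real(np.fft.ifft2(np.fft.fft2(pattern) * H))

    sigma_hat = estimate_sigma(pattern, observed)
    print(f"estimated defocus sigma: {sigma_hat:.2f} (true {true_sigma})")

    # Pre-filter an image so that, after the same defocus, it looks sharper.
    compensated = wiener_prefilter(pattern, sigma_hat)
```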
Pages: 907-915
Number of pages: 9