Image parsing: Unifying segmentation, detection, and recognition

Cited by: 260
Authors
Tu, ZW [1]
Chen, XG
Yuille, AL
Zhu, SC
Affiliations
[1] Univ Calif Los Angeles, Dept Stat, Los Angeles, CA 90095 USA
[2] Univ Calif Los Angeles, Dept Psychol, Los Angeles, CA 90095 USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
Funding
US National Institutes of Health (NIH); US National Science Foundation (NSF);
Keywords
image parsing; image segmentation; object detection; object recognition; data-driven Markov chain Monte Carlo; AdaBoost;
DOI
10.1007/s11263-005-6642-x
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we present a Bayesian framework for parsing images into their constituent visual patterns. The parsing algorithm optimizes the posterior probability and outputs a scene representation as a "parsing graph", in a spirit similar to parsing sentences in speech and natural language. The algorithm constructs the parsing graph and reconfigures it dynamically using a set of moves, which are mostly reversible Markov chain jumps. This computational framework integrates two popular inference approaches: generative (top-down) methods and discriminative (bottom-up) methods. The former formulates the posterior probability in terms of generative models for images defined by likelihood functions and priors. The latter computes discriminative probabilities based on a sequence (cascade) of bottom-up tests/filters. In our Markov chain algorithm design, the posterior probability, defined by the generative models, is the invariant (target) probability for the Markov chain, and the discriminative probabilities are used to construct proposal probabilities to drive the Markov chain. Intuitively, the bottom-up discriminative probabilities activate top-down generative models. In this paper, we focus on two types of visual patterns: generic visual patterns, such as texture and shading, and object patterns, including human faces and text. These types of patterns compete and cooperate to explain the image, and so image parsing unifies image segmentation, object detection, and recognition. If we use generic visual patterns only, then image parsing will correspond to image segmentation (Tu and Zhu, 2002, IEEE Trans. PAMI, 24(5):657-673). We illustrate our algorithm on natural images of complex city scenes and show examples where image segmentation can be improved by allowing object-specific knowledge to disambiguate low-level segmentation cues, and conversely where object detection can be improved by using generic visual patterns to explain away shadows and occlusions.
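The computational design described in the abstract is a Metropolis-Hastings scheme in which bottom-up discriminative cues shape the proposals while the generative posterior remains the target distribution. As a hedged illustration (the symbols W for a parsing graph, I for the image, and q for a proposal kernel are generic notation, not copied from the paper), the acceptance probability for a proposed reconfiguration W to W' takes the standard form

\[
p(W \mid I) \propto p(I \mid W)\, p(W), \qquad
\alpha(W \to W') = \min\!\left(1,\; \frac{q(W' \to W)\, p(W' \mid I)}{q(W \to W')\, p(W \mid I)}\right)
\]

where q(W \to W') would be assembled from the bottom-up tests the abstract mentions (e.g. AdaBoost-based face and text detectors, edge and texture cues), so the chain preferentially proposes moves where discriminative evidence is strong, while detailed balance with respect to p(W | I) keeps the generative posterior as the invariant distribution.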
Pages: 113-140
Number of pages: 28
References
59 entries in total
  • [1] [Anonymous], P IEEE C COMP VIS PA
  • [2] [Anonymous], 2003, Foundations of Statistical Natural Language Processing
  • [3] [Anonymous], CONT MATH
  • [4] [Anonymous], NIPS
  • [5] [Anonymous], P CVPR
  • [6] [Anonymous], RECONNAISSANCE FORME
  • [7] BARBU A, 2004, P IEEE C COMP VIS PA
  • [8] BARBU A, 2003, P INT C COMP VIS NIC
  • [9] Barnard K., 2001, ICCV
  • [10] Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(4): 509-522