Partial Occlusion Handling in Pedestrian Detection With a Deep Model

Cited by: 45
Authors
Ouyang, Wanli [1 ]
Zeng, Xingyu [1 ]
Wang, Xiaogang [1 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Elect Engn, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep model; human detection; object detection; occlusion handling; pedestrian detection; PARTIALLY OCCLUDED HUMANS; BAYESIAN COMBINATION; PICTORIAL STRUCTURES; OBJECT DETECTION; REPRESENTATION; HISTOGRAMS; MULTIPLE; NETWORK;
DOI
10.1109/TCSVT.2015.2501940
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Part-based models have demonstrated their merit in object detection. However, a key issue remains: how to integrate the inaccurate scores of part detectors when parts are occluded, abnormally deformed, or affected by unusual appearances or illumination. To handle the imperfection of part detectors, this paper presents a probabilistic pedestrian detection framework. In this framework, a deformable part-based model is used to obtain the scores of part detectors, and the visibilities of parts are modeled as hidden variables. Once the occluded parts are identified, their effects are properly removed from the final detection score. Unlike previous occlusion handling approaches, which assumed independence among the visibility probabilities of parts or manually defined rules for the visibility relationship, a deep model is proposed in this paper for learning the visibility relationship among overlapping parts at multiple layers. The proposed approach can be viewed as a general post-processing step for part-detection results and can take the detection scores of existing part-based models as input. Experimental results on three public datasets (Caltech, ETH, and Daimler) and a new CUHK occlusion dataset (http://www.ee.cuhk.edu.hk/~xgwang/CUHK_pedestrian.html), which is specially designed for evaluating occlusion handling approaches, show the effectiveness of the proposed approach.
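
Illustrative sketch (not the authors' code): the abstract describes feeding part-detection scores into a multi-layer deep model whose hidden units represent part visibilities, with the final detection score formed from the inferred visibilities. The following minimal NumPy sketch shows that idea under assumed choices; the part layout (6/3/2 parts per layer), the randomly initialized weights, and the function names are hypothetical, and in the paper the parameters would be learned from data.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical part layout: 6 small parts (layer 1), 3 medium parts (layer 2),
# 2 large parts (layer 3), mirroring "parts at multiple layers".
sizes = [6, 3, 2]

# Illustrative (randomly initialized) parameters; learned in the actual model.
W = [rng.normal(scale=0.5, size=(sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
b = [rng.normal(scale=0.1, size=sizes[i + 1]) for i in range(len(sizes) - 1)]
w_out = rng.normal(scale=0.5, size=sizes[-1])  # weights combining top-layer visibilities

def detection_score(part_scores):
    # Layer-1 visibility probabilities inferred from part-detector scores.
    h = sigmoid(np.asarray(part_scores, dtype=float))
    for Wi, bi in zip(W, b):
        # Visibility of a larger part depends on the visibilities of the
        # overlapping smaller parts in the layer below.
        h = sigmoid(Wi @ h + bi)
    # Occluded (low-visibility) parts contribute little to the final score.
    return float(w_out @ h)

# Example: scores produced by an (assumed) deformable part-based detector.
print(detection_score([1.2, -0.3, 0.8, 0.1, -1.0, 0.5]))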
Pages: 2123-2137
Number of pages: 15