PISA: Pixelwise Image Saliency by Aggregating Complementary Appearance Contrast Measures With Edge-Preserving Coherence

Cited by: 73
Authors
Wang, Keze [1 ]
Lin, Liang [1 ]
Lu, Jiangbo [2 ]
Li, Chenglong [3 ]
Shi, Keyang [1 ]
Affiliations
[1] Sun Yat-sen University, Guangzhou 510275, Guangdong, China
[2] Advanced Digital Sciences Center, Singapore 138632, Singapore
[3] Anhui University, Hefei 230601, China
Keywords
Visual saliency; object detection; feature engineering; image filtering; visual attention
DOI
10.1109/TIP.2015.2432712
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Driven by recent vision and graphics applications such as image segmentation and object recognition, computing pixel-accurate saliency values that uniformly highlight foreground objects has become increasingly important. In this paper, we propose a unified framework, pixelwise image saliency aggregating (PISA), which combines various bottom-up cues and priors. It generates spatially coherent yet detail-preserving, pixel-accurate, and fine-grained saliency, overcoming the limitations of previous methods that rely on homogeneous superpixel-based processing and color-only treatment. PISA aggregates multiple saliency cues in a global context, such as complementary color and structure contrast measures, with their spatial priors in the image domain. The saliency confidence is further jointly modeled with a neighborhood consistency constraint in an energy minimization formulation, in which each pixel is evaluated with multiple hypothetical saliency levels. Instead of resorting to global discrete optimization, we employ the cost-volume filtering technique to solve our formulation, assigning saliency levels smoothly while preserving edge-aware structure details. In addition, a faster version of PISA is developed using a gradient-driven image subsampling strategy that greatly improves runtime efficiency while maintaining comparable detection accuracy. Extensive experiments on a number of public data sets suggest that PISA convincingly outperforms other state-of-the-art approaches. With this work, we also contribute a new data set of 800 commodity images for evaluating saliency detection.
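The abstract's central computational step, evaluating each pixel against multiple hypothetical saliency levels and smoothing the per-level costs with an edge-preserving filter before a winner-take-all decision, follows the general cost-volume filtering recipe. The sketch below illustrates that recipe only and is not the authors' implementation: the guided filter used as the edge-preserving smoother, the absolute-difference data term, and the names and values raw_saliency, num_levels, radius, and eps are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving guided filter with a grayscale guide (stand-in smoother).

    guide, src: 2-D float arrays in [0, 1].
    """
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_Ip = uniform_filter(guide * src, size)
    corr_II = uniform_filter(guide * guide, size)

    var_I = corr_II - mean_I * mean_I
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I

    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b


def cost_volume_saliency(raw_saliency, guide, num_levels=16, radius=8, eps=1e-3):
    """Assign each pixel one of num_levels hypothetical saliency levels.

    For every candidate level, a per-pixel cost |raw - level| is computed,
    each cost slice is smoothed with the edge-preserving filter above, and
    the level with the minimum filtered cost wins (winner-take-all).
    """
    levels = np.linspace(0.0, 1.0, num_levels)
    costs = np.empty((num_levels,) + raw_saliency.shape, dtype=np.float64)
    for k, level in enumerate(levels):
        cost = np.abs(raw_saliency - level)                  # data term (assumed form)
        costs[k] = guided_filter(guide, cost, radius, eps)   # edge-aware aggregation
    return levels[np.argmin(costs, axis=0)]                  # per-pixel winner-take-all


if __name__ == "__main__":
    # Toy example: a noisy square of high raw saliency on a dark background.
    rng = np.random.default_rng(0)
    guide = np.zeros((64, 64))
    guide[16:48, 16:48] = 1.0
    raw = np.clip(guide * 0.8 + rng.normal(0, 0.15, guide.shape), 0, 1)
    refined = cost_volume_saliency(raw, guide)
    print(refined.min(), refined.max())
```

In the paper itself the data term aggregates the complementary color and structure contrast cues with their spatial priors, and the filtering is guided by the input image; the sketch keeps only the level-hypothesis, filter, and argmin structure of that pipeline.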
Pages: 3019-3033
Number of pages: 15