Saliency Integration: An Arbitrator Model

Cited by: 11
Authors:
Xu, Yingyue [1]
Hong, Xiaopeng [1]
Porikli, Fatih [2]
Liu, Xin [1]
Chen, Jie [1]
Zhao, Guoying [1]
Affiliations:
[1] Univ Oulu, Ctr Machine Vision & Signal Anal, Oulu 90014, Finland
[2] Australian Natl Univ, Res Sch Engn, Canberra, ACT 0200, Australia
Keywords:
Saliency integration; saliency aggregation; online model; arbitrator model; object detection; attention
DOI: 10.1109/TMM.2018.2856126
CLC classification: TP [automation technology; computer technology]
Subject classification code: 0812
Abstract:
Saliency integration, which unifies the saliency maps produced by multiple saliency models, has attracted much attention. Previous offline integration methods usually face two challenges: 1) if most of the candidate saliency models misjudge the saliency on an image, the integration result leans heavily on those inferior candidate models; and 2) the absence of ground-truth saliency labels makes it difficult to estimate the expertise of each candidate model. To address these problems, in this paper, we propose an arbitrator model (AM) for saliency integration. First, we incorporate the consensus of multiple saliency models and external knowledge into a reference map that effectively rectifies the misleading influence of inferior candidate models. Second, our quest for ways of estimating the expertise of the saliency models without ground-truth labels gives rise to two distinct online model-expertise estimation methods. Finally, we derive a Bayesian integration framework to reconcile the saliency models of varying expertise with the reference map. To evaluate the proposed AM model extensively, we test 27 state-of-the-art saliency models, covering both traditional and deep-learning ones, in various combinations over four datasets. The evaluation results show that the AM model improves performance substantially over existing state-of-the-art integration methods, regardless of the chosen candidate saliency models.
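The fusion step described in the abstract can be illustrated with a minimal sketch: candidate saliency maps are combined in log-odds space, each weighted by an estimated expertise, with a reference map acting as a prior. This is an assumption-based illustration of generic expertise-weighted Bayesian fusion, not the paper's exact derivation; the function name and weighting scheme are hypothetical.

```python
import numpy as np

def integrate_saliency(maps, expertise, reference, eps=1e-6):
    """Fuse candidate saliency maps with a reference-map prior via
    expertise-weighted log-odds averaging (illustrative only).

    maps      : list of HxW arrays with values in [0, 1], one per model
    expertise : per-model weights (higher = more trusted)
    reference : HxW array in [0, 1], serving as the prior
    """
    # Clip to avoid infinities in the log-odds transform.
    maps = [np.clip(m, eps, 1 - eps) for m in maps]
    reference = np.clip(reference, eps, 1 - eps)

    # Normalize expertise weights so they sum to 1.
    w = np.asarray(expertise, dtype=float)
    w = w / w.sum()

    # Start from the reference map's log-odds (the prior), then add
    # each candidate's log-odds scaled by its expertise weight.
    logit = np.log(reference / (1 - reference))
    for wi, m in zip(w, maps):
        logit = logit + wi * np.log(m / (1 - m))

    # Map back to a probability-like saliency value in (0, 1).
    return 1.0 / (1.0 + np.exp(-logit))
```

With a neutral reference (0.5 everywhere), pixels where the trusted candidates agree on high saliency end up above 0.5; a confident reference map pulls the fused result toward the consensus prior.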
Pages: 98-113 (16 pages)