What Do Different Evaluation Metrics Tell Us About Saliency Models?

Cited by: 422
Authors
Bylinskii, Zoya [1 ]
Judd, Tilke [2 ]
Oliva, Aude [1 ]
Torralba, Antonio [1 ]
Durand, Fredo [1 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] Google, Zurich, Switzerland
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Saliency models; evaluation metrics; benchmarks; fixation maps; saliency applications; VISUAL-ATTENTION; IMAGE RETRIEVAL; EYE-MOVEMENTS; LOCALIZATION; ALLOCATION; FOVEATION; SELECTION; SCENE; GAZE;
DOI
10.1109/TPAMI.2018.2815601
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, and these differences stem from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building on the differences in metric properties and behaviors, we make recommendations for metric selections under specific assumptions and for specific applications.
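To make the kind of metric computation the abstract describes concrete, here is a minimal sketch of one commonly used metric, Normalized Scanpath Saliency (NSS): the saliency map is z-scored and then averaged at the human fixation locations. The function name and the use of numpy arrays are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def nss(saliency_map, fixation_map):
    """Normalized Scanpath Saliency (illustrative sketch).

    saliency_map: 2D float array (model prediction).
    fixation_map: 2D binary array, 1 at human fixation locations.
    Returns the mean z-scored saliency value at fixated pixels;
    higher is better, 0 corresponds to chance.
    Assumes the saliency map is not constant (std > 0).
    """
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return s[fixation_map.astype(bool)].mean()
```

A model that concentrates saliency on fixated locations scores well above 0, while a map evaluated against uniformly spread fixations scores near 0, illustrating how this metric rewards true positives without explicitly penalizing spatially distant false positives, one of the behavioral differences the paper analyzes.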
Pages: 740 - 757
Number of pages: 18
Related Papers
88 records in total
  • [21] Saliency, attention, and visual search: An information theoretic approach
    Bruce, Neil D. B.
    Tsotsos, John K.
    [J]. JOURNAL OF VISION, 2009, 9 (03):
  • [22] Towards the quantitative evaluation of visual attention models
    Bylinskii, Z.
    DeGennaro, E. M.
    Rajalingham, R.
    Ruda, H.
    Zhang, J.
    Tsotsos, J. K.
    [J]. VISION RESEARCH, 2015, 116 : 258 - 268
  • [23] Where Should Saliency Models Look Next?
    Bylinskii, Zoya
    Recasens, Adria
    Borji, Ali
    Oliva, Aude
    Torralba, Antonio
    Durand, Fredo
    [J]. COMPUTER VISION - ECCV 2016, PT V, 2016, 9909 : 809 - 824
  • [24] High-level aspects of oculomotor control during viewing of natural-task images
    Canosa, RL
    Pelz, JB
    Mennie, NR
    Peak, J
    [J]. HUMAN VISION AND ELECTRONIC IMAGING VIII, 2003, 5007 : 240 - 251
  • [25] Mobile Robot Vision Navigation & Localization Using Gist and Saliency
    Chang, Chin-Kai
    Siagian, Christian
    Itti, Laurent
    [J]. IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), 2010, : 4147 - 4154
  • [26] Deriving an appropriate baseline for describing fixation behaviour
    Clarke, Alasdair D. F.
    Tatler, Benjamin W.
    [J]. VISION RESEARCH, 2014, 102 : 41 - 51
  • [27] DeCarlo D, 2002, ACM T GRAPHIC, V21, P769, DOI 10.1145/566570.566650
  • [28] Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848
  • [29] Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli
    Einhaeuser, Wolfgang
    Rutishauser, Ueli
    Koch, Christof
    [J]. JOURNAL OF VISION, 2008, 8 (02):
  • [30] Does luminance-contrast contribute to a saliency map for overt visual attention?
    Einhäuser, W
    König, P
    [J]. EUROPEAN JOURNAL OF NEUROSCIENCE, 2003, 17 (05) : 1089 - 1097