Meaning maps detect the removal of local semantic scene content but deep saliency models do not

Cited by: 2
Authors
Hayes, Taylor R. [1]
Henderson, John M. [1, 2]
Affiliations
[1] Univ Calif Davis, Ctr Mind & Brain, Davis, CA 95616 USA
[2] Univ Calif Davis, Dept Psychol, Davis, CA 95616 USA
Funding
U.S. National Institutes of Health;
Keywords
Scene perception; Semantics; Image saliency; Meaning maps; Deep learning; EYE-MOVEMENTS; ATTENTION; GUIDANCE;
DOI
10.3758/s13414-021-02395-x
Chinese Library Classification
B84 [Psychology];
Discipline Classification Codes
04; 0402;
Abstract
Meaning mapping uses human raters to estimate different semantic features in scenes, and has been a useful tool in demonstrating the important role semantics play in guiding attention. However, recent work has argued that meaning maps do not capture semantic content but, like deep learning models of scene attention, represent only semantically neutral image features. In the present study, we directly tested this hypothesis using a diffeomorphic image transformation that is designed to remove the meaning of an image region while preserving its image features. Specifically, we tested whether meaning maps and three state-of-the-art deep learning models were sensitive to the loss of semantic content in this critical diffeomorphed scene region. The results were clear: meaning maps generated by human raters showed a large decrease in the diffeomorphed scene regions, while all three deep saliency models showed a moderate increase in the diffeomorphed scene regions. These results demonstrate that meaning maps reflect local semantic content in scenes while deep saliency models do something else. We conclude that the meaning-mapping approach is an effective tool for estimating semantic content in scenes.
Published in: Attention, Perception, & Psychophysics, 2022, 84: 647-654 (8 pages)