Visual search and real-image similarity: An empirical assessment through the lens of deep learning

Times Cited: 0
Authors
Petilli, Marco A. [1 ]
Rodio, Francesca M. [2 ,5 ]
Guenther, Fritz [3 ]
Marelli, Marco [1 ,4 ]
Affiliations
[1] Univ Milano Bicocca, Dept Psychol, Milan, Italy
[2] IUSS, Inst Adv Studies, Pavia, Italy
[3] Humboldt Univ, Dept Psychol, Berlin, Germany
[4] Milan Ctr Neurosci, NeuroMI, Milan, Italy
[5] Univ Pavia, Dept Brain & Behav Sci, Pavia, Italy
Keywords
Visual search; Visual similarity; Perceptual processing; Convolutional neural networks; Search efficiency; Computer vision; ATTENTIONAL SELECTION; ORIENTATION; MODEL; OBJECTS; VISION;
DOI
10.3758/s13423-024-02583-4
Chinese Library Classification: B841 [Research Methods in Psychology]
Discipline Code: 040201
Abstract
The ability to predict how efficiently a person finds an object in the environment is a crucial goal of attention research. Central to this issue are the similarity principles initially proposed by Duncan and Humphreys, which outline how the similarity between target and distractor objects (TD) and between distractor objects themselves (DD) affects search efficiency. However, these principles lack direct quantitative support from an ecological perspective, as they are a summary approximation of a wide range of lab-based results that generalise poorly to real-world scenarios. This study exploits deep convolutional neural networks to predict human search efficiency from computational estimates of similarity between the objects populating, potentially, any visual scene. Our results provide ecological evidence supporting the similarity principles: search performance varies continuously across tasks and conditions and improves with decreasing TD similarity and increasing DD similarity. Furthermore, our results reveal a crucial dissociation: TD and DD similarities mainly operate at two distinct levels of the network, with DD similarity acting at the intermediate layers encoding coarse object features and TD similarity at the final layers encoding the complex features used for classification. This suggests that the two similarities exert their major effects at two distinct perceptual levels and demonstrates our methodology's potential to offer insights into the depth of visual processing on which search relies. By combining computational techniques with visual search principles, this approach aligns with modern trends in other research areas and fulfils longstanding demands for more ecologically valid research in the field of visual search.
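As a concrete illustration of the kind of pipeline the abstract describes, the Python sketch below estimates target-distractor (TD) and distractor-distractor (DD) similarities from the activations of a pretrained convolutional network, once at an intermediate layer and once at the final pre-classification layer. The choice of torchvision's VGG-16, the specific layer cut-off, and cosine similarity as the measure are illustrative assumptions, not the authors' published implementation.

# Minimal sketch (assumed setup, not the authors' released code): estimate
# TD and DD similarity from a pretrained CNN's activations. VGG-16, the layer
# cut-off, and cosine similarity are illustrative choices.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def intermediate_features(images, upto=16):
    # Flattened activations after the first `upto` conv/ReLU/pool modules
    # (coarse object features).
    with torch.no_grad():
        x = images
        for layer in model.features[:upto]:
            x = layer(x)
        return torch.flatten(x, start_dim=1)

def final_features(images):
    # Penultimate fully connected representation, i.e. the complex features
    # feeding the 1000-way classifier.
    with torch.no_grad():
        x = model.avgpool(model.features(images))
        x = torch.flatten(x, 1)
        for layer in model.classifier[:-1]:
            x = layer(x)
        return x

def td_dd_similarity(target, distractors, extract):
    # Mean target-distractor and mean pairwise distractor-distractor
    # cosine similarity for one search display.
    t = F.normalize(extract(target), dim=1)         # (1, d)
    d = F.normalize(extract(distractors), dim=1)    # (n, d)
    td = (t @ d.T).mean().item()
    sims = d @ d.T                                  # (n, n) cosine matrix
    n = d.shape[0]
    dd = ((sims.sum() - n) / (n * (n - 1))).item()  # off-diagonal mean
    return td, dd

# Random tensors stand in for preprocessed 224x224 RGB object images.
target = torch.randn(1, 3, 224, 224)
distractors = torch.randn(6, 3, 224, 224)
print(td_dd_similarity(target, distractors, intermediate_features))
print(td_dd_similarity(target, distractors, final_features))

Regressing behavioural search-efficiency measures on layer-wise estimates of this kind is what would let one ask at which processing depth TD and DD similarity exert their effects; the random inputs above merely demonstrate the mechanics.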
Pages: 822-838 (17 pages)