Inherent Importance of Early Visual Features in Attraction of Human Attention

Cited by: 3
Authors
Eghdam, Reza [1 ,2 ]
Ebrahimpour, Reza [1 ,2 ]
Zabbah, Iman [3 ]
Zabbah, Sajjad [2 ]
Affiliations
[1] Shahid Rajaee Teacher Training Univ, Fac Comp Engn, Tehran, Iran
[2] Inst Res Fundamental Sci IPM, Sch Cognit Sci SCS, Tehran, Iran
[3] Islamic Azad Univ, Torbat E Heydariyeh Branch, Dept Comp, Torbat E Heydariyeh, Iran
Keywords
SALIENCY; MODEL;
DOI
10.1155/2020/3496432
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Local contrasts attract human attention to different areas of an image. Studies have shown that orientation, color, and intensity are basic visual features whose contrasts attract our attention. Because these features lie in different modalities, their contributions to attracting human attention are not easily comparable. In this study, we investigated the importance of these three features in attracting human attention in synthetic and natural images. By choosing a 100% detectable contrast in each modality, we studied the competition between the different features. Psychophysics results showed that, although each single feature could be detected easily in all trials, when the features were presented simultaneously in a stimulus, orientation always attracted subjects' attention. In addition, computational results showed that the orientation feature map is more informative about the pattern of human saccades in natural images. Finally, using optimization algorithms, we quantified the impact of each feature map on the construction of the final saliency map.
Pages: 15
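The last sentence of the abstract describes combining the orientation, color, and intensity feature maps into a final saliency map, with the contribution of each map quantified by an optimization algorithm. The Python sketch below illustrates one way such weights could be fitted against fixation data. It is an illustration only, not the authors' procedure: the toy feature maps, the fixation mask, the loss function, and the choice of a Nelder-Mead optimizer are all assumptions.

import numpy as np
from scipy.optimize import minimize

def combine_feature_maps(weights, feature_maps):
    # Weighted linear combination of feature maps, normalized to sum to 1.
    w = np.clip(weights, 0.0, None)            # keep weights non-negative
    saliency = sum(wi * fm for wi, fm in zip(w, feature_maps))
    total = saliency.sum()
    return saliency / total if total > 0 else saliency

def loss(weights, feature_maps, fixation_mask):
    # Negative mean saliency at fixated pixels: minimizing this pushes
    # saliency mass toward the locations observers actually looked at.
    saliency = combine_feature_maps(weights, feature_maps)
    return -saliency[fixation_mask].mean()

# Toy data standing in for real feature maps and recorded fixations (assumption).
rng = np.random.default_rng(0)
orientation_map, color_map, intensity_map = (rng.random((64, 64)) for _ in range(3))
fixation_mask = rng.random((64, 64)) > 0.98    # ~2% of pixels treated as fixated

result = minimize(
    loss,
    x0=np.ones(3) / 3,                         # start from equal weights
    args=([orientation_map, color_map, intensity_map], fixation_mask),
    method="Nelder-Mead",                      # derivative-free, fine for 3 parameters
)
w_hat = np.clip(result.x, 0.0, None)
print("relative weights (orientation, color, intensity):", w_hat / w_hat.sum())

With real data, the feature maps would come from a saliency model's orientation, color, and intensity channels and the fixation mask from eye-tracking recordings; the normalized weights then indicate how much each channel contributes to the combined saliency map.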