Visual saliency detection via integrating bottom-up and top-down information

Times cited: 11
Authors
Shariatmadar, Zahra Sadat [1 ]
Faez, Karim [1 ]
Affiliations
[1] Amirkabir Univ Technol, Elect Engn Dept, Tehran 15914, Iran
Source
OPTIK | 2019, Vol. 178
Keywords
Phase congruency; Bottom-up and top-down attention; Visual saliency; Object detection; Human visual system; EYE-MOVEMENTS; OBJECT DETECTION; MODEL; ALLOCATION; ATTENTION; GUIDANCE; SEARCH;
DOI
10.1016/j.ijleo.2018.10.096
Chinese Library Classification (CLC)
O43 [Optics];
Discipline classification codes
070207; 0803;
Abstract
Selective attention is a process that enables biological and artificial systems to discard redundant information and highlight the valuable regions of an image. The relevant information is determined by task-driven (top-down, TD) or task-independent (bottom-up, BU) factors. In this paper, we present a new computational visual saliency model that combines BU and TD mechanisms to extract the relevant regions of images containing man-made objects. The prior knowledge about man-made objects is their compactness and their strong responses across different orientations. Accordingly, using the maximum and minimum moments of phase congruency covariance together with oriented responses from Gabor filters, we obtain feature maps from the two attention mechanisms. Finally, these maps are linearly combined, with the combination coefficients derived from the entropy of each feature map. Three region-based databases were used to examine the performance of the proposed method, and the experimental results demonstrate the efficiency and effectiveness of this new visual saliency model.
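The entropy-weighted linear fusion described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the placeholder maps, the histogram-based entropy estimate, and the weight normalization are assumptions made for the sketch, and the phase-congruency and Gabor front ends that would produce the BU and TD maps are omitted.

# Minimal sketch (not the authors' code) of entropy-weighted linear fusion
# of bottom-up and top-down feature maps, as described in the abstract.
# Assumption: each feature map is a 2-D array already normalized to [0, 1].
import numpy as np

def map_entropy(feature_map, bins=256):
    # Shannon entropy of the map's intensity histogram.
    hist, _ = np.histogram(feature_map, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                          # skip empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))

def fuse_maps(feature_maps):
    # Linear combination with coefficients proportional to each map's entropy
    # (hypothetical normalization; the paper may weight the maps differently).
    entropies = np.array([map_entropy(m) for m in feature_maps])
    weights = entropies / entropies.sum()
    fused = sum(w * m for w, m in zip(weights, feature_maps))
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)

# Usage with placeholder maps standing in for the BU (phase congruency moments)
# and TD (Gabor orientation) maps:
bu_map = np.random.rand(240, 320)
td_map = np.random.rand(240, 320)
saliency_map = fuse_maps([bu_map, td_map])

The single scalar weight per map keeps the fusion simple; maps whose histograms carry more information (higher entropy) contribute more to the final saliency map.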
Pages: 1195-1207
Number of pages: 13
Related papers
50 records in total (entries [41]-[50] shown)
  • [41] A simple saliency detection approach via automatic top-down feature fusion
    Qiu, Yu
    Liu, Yun
    Yang, Hui
    Xu, Jing
    NEUROCOMPUTING, 2020, 388 : 124 - 134
  • [42] Selection of a best metric and evaluation of bottom-up visual saliency models
    Emami, Mohsen
    Hoberock, Lawrence L.
    IMAGE AND VISION COMPUTING, 2013, 31 (10) : 796 - 808
  • [43] Top-Down Saliency Detection via Contextual Pooling
    Zhu, Jun
    Qiu, Yuanyuan
    Zhang, Rui
    Huang, Jun
    Zhang, Wenjun
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2014, 74 (01): 33 - 46
  • [44] Autonomous Behavior-Based Switched Top-Down and Bottom-Up Visual Attention for Mobile Robots
    Xu, Tingting
    Kuehnlenz, Kolja
    Buss, Martin
    IEEE TRANSACTIONS ON ROBOTICS, 2010, 26 (05) : 947 - 954
  • [45] What is bottom-up and what is top-down in predictive coding?
    Rauss, Karsten
    Pourtois, Gilles
    FRONTIERS IN PSYCHOLOGY, 2013, 4
  • [46] Bottom-Up and Top-Down Visuomotor Responses to Action Observation
    Ubaldi, Silvia
    Barchiesi, Guido
    Cattaneo, Luigi
    CEREBRAL CORTEX, 2015, 25 (04) : 1032 - 1041
  • [47] Bottom-up and top-down modulation of route selection in imitation
    Tessari, Alessia
    Proietti, Riccardo
    Rumiati, Raffaella I.
    COGNITIVE NEUROPSYCHOLOGY, 2021, 38 (7-8) : 515 - 530
  • [48] Top-down and bottom-up neurodynamic evidence in patients with tinnitus
    Hong, Sung Kwang
    Park, Sejik
    Ahn, Min-Hee
    Min, Byoung-Kyong
    HEARING RESEARCH, 2016, 342 : 86 - 100
  • [49] Temporal learning of bottom-up connections via spatially nonspecific top-down inputs
    Lee, Jung Hoon
    Kim, Mean-Hwan
    Vijayan, Sujith
    NEUROCOMPUTING, 2020, 411 : 128 - 138
  • [50] A Combing Top-Down and Bottom-Up Discriminative Dictionaries Learning for Non-specific Object Detection
    Xie, Yurui
    Wu, Qingbo
    Luo, Bing
    Huang, Chao
    Tang, Liangzhi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2014, E97D (05): 1367 - 1370