Isophote Based Center-Surround Contrast Computation for Image Saliency Detection

Cited by: 1
Authors
Chuang, Yuelong [1 ]
Chen, Ling [1 ]
Chen, Gencai [1 ]
Woodward, John [2 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci, Hangzhou, Zhejiang, Peoples R China
[2] Univ Stirling, Sch Nat Sci, Stirling FK9 4LA, Scotland
Source
IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS | 2014, Vol. E97D, No. 01
Keywords
image saliency; isophote; center-surround contrast; VISUAL-ATTENTION;
D O I
10.1587/transinf.E97.D.160
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
In this paper, we introduce a biologically motivated model for detecting image saliency. The model employs an isophote-based operator to extract, for each pixel, potential structure and global saliency information, which are then combined via an integral image to build the final saliency map. Experimental studies show that the proposed model outperforms seven state-of-the-art saliency detectors.
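The abstract names two standard ingredients: isophote (iso-intensity curve) structure and integral-image summation for fast region statistics. The sketch below illustrates both with their textbook formulations; it is an assumption-laden illustration, not the authors' actual operator, whose details are not given in this record.

```python
import numpy as np

def isophote_curvature(img, eps=1e-8):
    """Textbook isophote curvature
    k = -(Ly^2*Lxx - 2*Lx*Ly*Lxy + Lx^2*Lyy) / (Lx^2 + Ly^2)^(3/2);
    the paper's exact structure operator may differ."""
    Ly, Lx = np.gradient(img.astype(float))   # derivatives along rows (y) and cols (x)
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    num = Ly**2 * Lxx - 2.0 * Lx * Ly * Lxy + Lx**2 * Lyy
    den = (Lx**2 + Ly**2) ** 1.5 + eps        # eps avoids division by zero in flat regions
    return -num / den

def integral_image(img):
    """Summed-area table with a zero-padded first row/column:
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four table lookups,
    enabling fast center-surround contrast at any scale."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

A center-surround contrast can then be formed, for example, as the difference between the mean of a small center box and the mean of a larger surround box, each obtained from `box_sum` in constant time per pixel.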
Pages: 160-163
Page count: 4
Related papers
12 items total
  • [1] Achanta R, 2008, LECT NOTES COMPUT SC, V5008, P66
  • [2] [Anonymous], 2007, PROC IEEE C COMPUT V, DOI 10.1109/CVPR.2007.383267
  • [3] [Anonymous], IEEE COMPUTER VISION
  • [4] Felzenszwalb PF, Huttenlocher DP, Efficient graph-based image segmentation, INTERNATIONAL JOURNAL OF COMPUTER VISION, 2004, 59(02): 167-181
  • [5] FRINTROP S, 2007, P ICVS
  • [6] Harel J., 2006, Graph-Based Visual Saliency, V19, DOI DOI 10.7551/MITPRESS/7503.003.0073
  • [7] Itti L, Koch C, Niebur E, A model of saliency-based visual attention for rapid scene analysis, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1998, 20(11): 1254-1259
  • [8] Kimura A, Yonetani R, Hirayama T, Computational models of human visual attention and their implementations: a survey, IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2013, E96D(03): 562-578
  • [9] Klein Dominik A., 2011, IEEE International Conference on Robotics and Automation, P4411
  • [10] Ma Y.-F., 2003, Proceedings of the eleventh ACM international conference on Multimedia, P374, DOI [DOI 10.1145/957013.957094, 10.1145/957013.957094]