Extracting salient region for pornographic image detection

Cited by: 15
Authors
Yan, Chenggang Clarence [1 ]
Liu, Yizhi [2 ]
Xie, Hongtao [3 ,4 ]
Liao, Zhuhua [2 ]
Yin, Jian
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[2] Hunan Univ Sci & Technol, Sch Comp Sci & Engn, Xiangtan, Peoples R China
[3] Chinese Acad Sci, Inst Informat Engn, Natl Engn Lab Informat Secur Technol, Beijing, Peoples R China
[4] Shandong Univ, Dept Comp, Weihai, Peoples R China
Keywords
Salient region detection; Pornographic image detection; Visual attention analysis; Region-of-interest (ROI); Skin-color model; Bag-of-visual-words (BoVW); Codebook algorithm; Speeded-up robust features (SURF); RETRIEVAL; MODEL;
DOI
10.1016/j.jvcir.2014.03.005
CLC Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Content-based pornographic image detection, in which the region-of-interest (ROI) plays an important role, is effective for filtering pornography. Traditionally, skin-color regions are extracted as the ROI. However, skin-color regions are usually larger than the subareas containing pornographic parts, and it is difficult for this approach to differentiate human skin from other skin-colored objects. In this paper, a novel approach of extracting salient regions is presented for pornographic image detection. First, a novel saliency map model is constructed. Then it is integrated with a skin-color model and a face detection model to capture the ROI in pornographic images. Next, an ROI-based codebook algorithm is proposed to enhance the representative power of visual words. Taking into account both speed and accuracy, we fuse speeded-up robust features (SURF) with color moments (CM). Experimental results show that our ROI extraction method achieves an average precision of 91.33%, higher than that of the skin-color model alone. Moreover, comparison with state-of-the-art methods of pornographic image detection shows that our approach remarkably improves performance. (c) 2014 Elsevier Inc. All rights reserved.
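Two building blocks of the pipeline described in the abstract, skin-color ROI extraction and color-moment (CM) features, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the RGB skin rule used here is a common heuristic (Peer et al.) standing in for the paper's skin-color model, and the function names are hypothetical.

```python
import numpy as np

def skin_mask(img):
    """Boolean mask of skin-colored pixels via a common RGB heuristic.

    img: uint8 array of shape (H, W, 3) in RGB order. The thresholds
    below are the widely used Peer et al. rule, an assumption standing
    in for the paper's own skin-color model.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    spread = img.max(axis=-1).astype(int) - img.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

def color_moments(pixels):
    """First three color moments (mean, std, skewness) per channel.

    pixels: array of shape (N, 3). Returns a 9-dim feature vector,
    illustrating the CM component that the paper fuses with SURF.
    """
    x = pixels.astype(float)
    mean = x.mean(axis=0)
    std = x.std(axis=0)
    skew = np.cbrt(((x - mean) ** 3).mean(axis=0))  # signed cube root
    return np.concatenate([mean, std, skew])

# Example: mask skin-colored pixels, then describe them with CM.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = (200, 120, 90)          # a skin-toned 4x4 patch
mask = skin_mask(img)                   # True only on the patch
features = color_moments(img[mask])     # 9-dim CM feature vector
```

In the paper this skin-color cue is only one of three models (saliency map, skin color, face detection) that are combined to localize the ROI before codebook construction; the sketch shows the feature side only.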
Pages: 1130-1135
Number of pages: 6
Related Papers
50 records
  • [21] Foreground and Background Propagation based Salient Region Detection
    Zhou, Li
    Chen, Yujin
    Yang, Zhaohui
    PROCEEDINGS OF 2016 9TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 1, 2016, : 109 - 112
  • [22] Exploiting Surroundedness and Superpixel cues for salient region detection
    Jiang, Yifeng
    Chang, Shan
    Zheng, Enxing
    Hu, Linna
    Liu, Ranran
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (15-16) : 10935 - 10951
  • [23] Salient Region Detection for Object Tracking
    Chan, Fan
    Jiang, Min
    Tang, Jinshan
    MOBILE MULTIMEDIA/IMAGE PROCESSING, SECURITY, AND APPLICATIONS 2012, 2012, 8406
  • [24] Image Location Estimation by Salient Region Matching
    Qian, Xueming
    Zhao, Yisi
    Han, Junwei
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) : 4348 - 4358
  • [25] Salient Region Detection by Fusing Foreground and Background Cues Extracted from Single Image
    Zhou, Qiangqiang
    Zhao, Weidong
    Zhang, Lin
    Wang, Zhicheng
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2016, 2016
  • [26] Salient Region Detection by Fusing Bottom-Up and Top-Down Features Extracted From a Single Image
    Tian, Huawei
    Fang, Yuming
    Zhao, Yao
    Lin, Weisi
    Ni, Rongrong
    Zhu, Zhenfeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2014, 23 (10) : 4389 - 4398
  • [27] Salient Region Detection by Region Color Contrast and Connectivity Prior
    Chen, Mei-Huan
    Dou, Yan
    Zhang, Shi-Hui
    COMPUTER VISION, CCCV 2015, PT II, 2015, 547 : 21 - 30
  • [28] SALIENT REGION DETECTION USING BACKGROUND CONTRAST
    Zhang, Yanbang
    Han, Junwei
    Guo, Lei
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 1184 - 1188
  • [29] Interpolation-tuned salient region detection
    LIU Yang
    LI XueQing
    WANG Lei
    NIU YuZhen
    SCIENCE CHINA-INFORMATION SCIENCES, 2014, 57 (01): 51 - 59
  • [30] Recurrent learning of context for salient region detection
    Wu, Chunling
    PERSONAL AND UBIQUITOUS COMPUTING, 2018, 22 (5-6) : 1017 - 1027