Saliency Detection for Unconstrained Videos Using Superpixel-Level Graph and Spatiotemporal Propagation

Cited by: 146
Authors
Liu, Zhi [1 ]
Li, Junhao [1 ]
Ye, Linwei [1 ]
Sun, Guangling [1 ]
Shen, Liquan [1 ]
Affiliation
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Motion saliency (MS); saliency model; spatial propagation; spatiotemporal saliency detection; superpixel-level graph; temporal propagation; unconstrained video; VISUAL-ATTENTION; MOTION DETECTION; DETECTION MODEL; SEGMENTATION; IMAGE; MAXIMIZATION; CONTRAST; TRACKING;
DOI
10.1109/TCSVT.2016.2595324
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
This paper proposes an effective spatiotemporal saliency model for unconstrained videos with complicated motion and complex scenes. First, superpixel-level motion and color histograms, as well as a global motion histogram, are extracted as features for saliency measurement. Then a superpixel-level graph is constructed with the addition of a virtual background node representing the global motion, and an iterative motion saliency (MS) measurement method based on the shortest-path algorithm over this graph is exploited to generate reasonable MS maps. Temporal propagation of saliency in both the forward and backward directions is performed using efficient operations on inter-frame similarity matrices to obtain integrated temporal saliency maps with improved coherence. Finally, spatial propagation of saliency, both locally and globally, is performed using intra-frame similarity matrices to obtain spatiotemporal saliency maps of even higher quality. Experimental results on two data sets containing various unconstrained videos demonstrate that the proposed model consistently outperforms state-of-the-art spatiotemporal saliency models in saliency detection performance.
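To make the graph construction and MS measurement step concrete, the following is a minimal Python sketch, not the authors' implementation: it builds a superpixel-level graph with a virtual background node holding the global motion histogram, weights edges by histogram differences, and scores each superpixel by its shortest-path distance to the background node. The chi-square distance, the adjacency input format, and the single-pass (non-iterative) computation are simplifying assumptions.

```python
# Minimal sketch of graph-based motion saliency with a virtual background node.
# This is an illustrative simplification, not the paper's full iterative method.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra


def chi2_dist(h1, h2, eps=1e-8):
    """Chi-square distance between two normalized histograms (assumed metric)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))


def motion_saliency(motion_hists, adjacency, global_hist):
    """
    motion_hists : (N, B) array of per-superpixel motion histograms.
    adjacency    : iterable of (i, j) index pairs of spatially adjacent superpixels.
    global_hist  : (B,) global motion histogram carried by the virtual background node.
    Returns an (N,) array of motion saliency values normalized to [0, 1].
    """
    n = motion_hists.shape[0]
    rows, cols, weights = [], [], []
    # Edges between adjacent superpixels, weighted by their feature difference.
    for i, j in adjacency:
        w = chi2_dist(motion_hists[i], motion_hists[j])
        rows += [i, j]; cols += [j, i]; weights += [w, w]
    # Connect every superpixel to the virtual background node (index n).
    for i in range(n):
        w = chi2_dist(motion_hists[i], global_hist)
        rows += [i, n]; cols += [n, i]; weights += [w, w]
    graph = csr_matrix((weights, (rows, cols)), shape=(n + 1, n + 1))
    # Motion saliency = shortest-path distance from the background node.
    dist = dijkstra(graph, directed=False, indices=n)[:n]
    dist -= dist.min()
    return dist / (dist.max() + 1e-8)
```

In the full pipeline described in the abstract, such per-frame MS maps would then be refined iteratively and further propagated temporally (via inter-frame similarity matrices) and spatially (via intra-frame similarity matrices).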
Pages: 2527 - 2542
Number of pages: 16
Related Papers
27 items in total
  • [1] Spatiotemporal saliency detection based on superpixel-level trajectory
    Li, Junhao
    Liu, Zhi
    Zhang, Xiang
    Le Meur, Olivier
    Shen, Liquan
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2015, 38 : 100 - 114
  • [2] Superpixel-Based Spatiotemporal Saliency Detection
    Liu, Zhi
    Zhang, Xiang
    Luo, Shuhua
    Le Meur, Olivier
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2014, 24 (09) : 1522 - 1540
  • [3] Graph-based saliency fusion with superpixel-level belief propagation for 3D fixation prediction
    Li, Bei
    Liu, Qiong
    Shi, Xiang
    Yang, You
    2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 2321 - 2325
  • [4] Video saliency detection via bagging-based prediction and spatiotemporal propagation
    Zhou, Xiaofei
    Liu, Zhi
    Li, Kai
    Sun, Guangling
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 51 : 131 - 143
  • [5] Spatiotemporal saliency detection and salient region determination for H.264 videos
    Hu, Kang-Ting
    Leou, Jin-Jang
    Hsiao, Han-Hui
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2013, 24 (07) : 760 - 772
  • [6] PolSAR Ship Detection Based on Superpixel-Level Scattering Mechanism Distribution Features
    Wang, Yinghua
    Liu, Hongwei
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2015, 12 (08) : 1780 - 1784
  • [7] Spatiotemporal Saliency Detection Using Textural Contrast and Its Applications
    Kim, Wonjun
    Kim, Changick
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2014, 24 (04) : 646 - 659
  • [8] Spatiotemporal saliency detection using border connectivity
    Wu, Tongbao
    Liu, Zhi
    Li, Junhao
    EIGHTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2016), 2016, 10033
  • [9] Discovering salient objects from videos using spatiotemporal salient region detection
    Kannan, Rajkumar
    Ghinea, Gheorghita
    Swaminathan, Sridhar
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2015, 36 : 154 - 178
  • [10] Spatiotemporal saliency model for small moving object detection in infrared videos
    Wang, Xin
    Ning, Chen
    Xu, Lizhong
    INFRARED PHYSICS & TECHNOLOGY, 2015, 69 : 111 - 117