Discovering salient objects from videos using spatiotemporal salient region detection

Cited by: 4
Authors
Kannan, Rajkumar [1 ]
Ghinea, Gheorghita [2 ]
Swaminathan, Sridhar [3 ]
Affiliations
[1] King Faisal Univ, Coll Comp Sci & Informat Technol, Al Hasa 31982, Saudi Arabia
[2] Brunel Univ, Uxbridge UB8 3PH, Middx, England
[3] Bishop Heber Coll Autonomous, Tiruchirappalli 620017, Tamil Nadu, India
Keywords
Salient region detection; Temporal saliency; Optical flow abstraction; Spatiotemporal saliency detection; Saliency map; VISUAL-ATTENTION; MODEL; COLOR; IMAGE; CONTRAST;
DOI
10.1016/j.image.2015.07.004
Chinese Library Classification (CLC)
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline classification codes
0808 ; 0809 ;
Abstract
Detecting salient objects in images and videos has many useful applications in computer vision. In this paper, a novel spatiotemporal salient region detection approach is proposed, which computes spatiotemporal saliency by estimating spatial and temporal saliency separately. The spatial saliency of an image is computed from a color contrast cue and a color distribution cue, both of which exploit patch-level and region-level image abstractions in a unified way. These cues are fused into an initial spatial saliency map, which is then refined to emphasize object saliency uniformly and to suppress the saliency of background noise. The final spatial saliency map is obtained by integrating the refined map with a center prior map. Temporal saliency is computed from local and global temporal saliency estimates based on patch-level optical flow abstractions; the local and global estimates are fused to form the temporal saliency map. Finally, the spatial and temporal saliency maps are integrated to generate a spatiotemporal saliency map. The proposed temporal and spatiotemporal salient region detection approaches are evaluated extensively on challenging salient object detection video datasets, and the experimental results show that they outperform several state-of-the-art saliency detection approaches. To accommodate different needs with respect to the speed/accuracy tradeoff, faster variants of the spatial, temporal, and spatiotemporal salient region detection approaches are also presented. (C) 2015 Elsevier B.V. All rights reserved.
Pages: 154-178
Page count: 25
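The pipeline described in the abstract (spatial saliency from color cues weighted by a center prior, temporal saliency from motion abstractions, and a fusion of the two) can be sketched as follows. This is a minimal illustrative sketch only, not the paper's actual method: the color-contrast cue, the Gaussian center prior, the frame-difference stand-in for optical flow, and the linear fusion weight `alpha` are all simplifying assumptions.

```python
import numpy as np

def center_prior(h, w, sigma=0.3):
    # Gaussian center prior map (hypothetical parameterization): pixels
    # near the image center receive higher weight.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def spatial_saliency(frame):
    # Crude global color-contrast cue standing in for the paper's
    # patch/region-level cues: distance of each pixel's color from the
    # global mean color, modulated by the center prior.
    h, w, _ = frame.shape
    global_mean = frame.reshape(-1, 3).mean(axis=0)
    contrast = np.linalg.norm(frame - global_mean, axis=2)
    contrast /= contrast.max() + 1e-8
    return contrast * center_prior(h, w)

def temporal_saliency(prev, curr):
    # Frame-difference magnitude as a stand-in for the optical flow
    # abstraction: large inter-frame change is treated as salient motion.
    diff = np.abs(curr.astype(float) - prev.astype(float)).sum(axis=2)
    return diff / (diff.max() + 1e-8)

def spatiotemporal_saliency(prev, curr, alpha=0.5):
    # Linear fusion of spatial and temporal maps into a spatiotemporal
    # saliency map; alpha is an assumed fusion weight.
    return alpha * spatial_saliency(curr) + (1 - alpha) * temporal_saliency(prev, curr)
```

Both component maps are normalized to [0, 1] before fusion, so the fused map stays in [0, 1]; a real implementation would replace the frame difference with a dense optical flow field and the global contrast with the patch- and region-level abstractions the paper describes.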