Accurate and Robust Video Saliency Detection via Self-Paced Diffusion

Cited by: 43
|
Authors
Li, Yunxiao [1 ]
Li, Shuai [1 ]
Chen, Chenglizhao [1 ,2 ]
Hao, Aimin [1 ]
Qin, Hong [3 ]
Affiliations
[1] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[2] Qingdao Univ, Qingdao 266071, Peoples R China
[3] SUNY Stony Brook, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; U.S. National Science Foundation;
Keywords
Saliency detection; Proposals; Video sequences; Spatial coherence; Computational modeling; Optical imaging; Optical sensors; Video saliency detection; long-term saliency revealing; key frame strategy; self-paced saliency diffusion; OBJECT DETECTION; SEGMENTATION; OPTIMIZATION; FUSION;
DOI
10.1109/TMM.2019.2940851
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Conventional video saliency detection methods frequently follow the common bottom-up thread, estimating video saliency in a short-term fashion. As a result, such methods cannot avoid the stubborn accumulation of errors when the collected low-level clues are persistently ill-detected. Moreover, we notice that some video frames, though not adjacent to the current frame along the time axis, may still benefit saliency detection in the current frame. We therefore propose to solve the aforementioned problems with our newly designed key frame strategy (KFS), whose core rationale is to exploit both the spatial-temporal coherency of the salient foregrounds and the objectness prior (i.e., how likely an object proposal is to contain an object of any class) to reveal valuable long-term information. This newly revealed long-term information then guides our subsequent "self-paced" saliency diffusion, which enables each key frame to determine its own diffusion range and diffusion strength to correct the ill-detected video frames. At the algorithmic level, we first divide a video sequence into short-term frame batches and obtain object proposals in a frame-wise manner. Then, for each object proposal, we use a pre-trained deep saliency model to extract high-dimensional features that represent spatial contrast. Since contrast computation across multiple neighboring video frames (i.e., in a non-local manner) is relatively insensitive to appearance variation, object proposals with high-quality low-level saliency estimates tend to exhibit strong similarity over the temporal scale. The long-term common consistency (e.g., appearance models and movement patterns) of the salient foregrounds can therefore be explicitly revealed via similarity analysis. We further boost detection accuracy via long-term-information-guided saliency diffusion in a self-paced manner.
We have conducted extensive experiments comparing our method with 16 state-of-the-art methods over the 4 largest publicly available benchmarks, and all results demonstrate the superiority of our method in terms of both accuracy and robustness.
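The "self-paced" diffusion idea described in the abstract can be illustrated with a minimal sketch. This is an interpretation, not the authors' actual formulation: the per-frame `quality` score (standing in for the objectness/consistency analysis), the frame-level `sim` matrix, and the mean-thresholding rule for picking key frames are all illustrative assumptions.

```python
import numpy as np

def self_paced_diffusion(saliency, quality, sim, base_range=5):
    """Illustrative sketch of self-paced saliency diffusion.

    saliency : (T,) per-frame saliency confidence in [0, 1]
    quality  : (T,) assumed per-frame detection-quality score in [0, 1]
               (a stand-in for the paper's objectness/consistency analysis)
    sim      : (T, T) assumed pairwise frame similarity in [0, 1]
    """
    T = len(saliency)
    refined = saliency.astype(float).copy()
    # Hypothetical key-frame selection: frames with above-average quality.
    key_frames = [t for t in range(T) if quality[t] > np.mean(quality)]
    for k in key_frames:
        # Each key frame determines its own diffusion range ("self-paced"):
        # more reliable key frames reach further along the time axis.
        reach = int(round(base_range * quality[k]))
        for t in range(max(0, k - reach), min(T, k + reach + 1)):
            if t == k:
                continue
            # ...and its own diffusion strength, weighted by similarity.
            w = np.clip(quality[k] * sim[k, t], 0.0, 1.0)
            refined[t] = (1 - w) * refined[t] + w * saliency[k]
    return refined
```

Under this toy model, an ill-detected frame sandwiched between two high-quality key frames is pulled toward their saliency values, mimicking the error-correction behavior the abstract describes.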
Pages: 1153 - 1167
Number of pages: 15
Related Papers
50 items in total
  • [1] Co-Saliency Detection via a Self-Paced Multiple-Instance Learning Framework
    Zhang, Dingwen
    Meng, Deyu
    Han, Junwei
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2017, 39 (05) : 865 - 878
  • [2] End-to-End Video Saliency Detection via a Deep Contextual Spatiotemporal Network
    Wei, Lina
    Zhao, Shanshan
    Bourahla, Omar Farouk
    Li, Xi
    Wu, Fei
    Zhuang, Yueting
    Han, Junwei
    Xu, Mingliang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (04) : 1691 - 1702
  • [3] Improved Robust Video Saliency Detection Based on Long-Term Spatial-Temporal Information
    Chen, Chenglizhao
    Wang, Guotao
    Peng, Chong
    Zhang, Xiaowei
    Qin, Hong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1090 - 1100
  • [4] A Self-paced Multiple-instance Learning Framework for Co-saliency Detection
    Zhang, Dingwen
    Meng, Deyu
    Li, Chao
    Jiang, Lu
    Zhao, Qian
    Han, Junwei
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 594 - 602
  • [5] Video Saliency Detection via Sparsity-Based Reconstruction and Propagation
    Cong, Runmin
    Lei, Jianjun
    Fu, Huazhu
    Porikli, Fatih
    Huang, Qingming
    Hou, Chunping
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (10) : 4819 - 4831
  • [6] Scene Prior Constrained Self-Paced Learning for Unsupervised Satellite Video Vehicle Detection
    Liang, Yuping
    Shi, Guangming
    Wu, Jinjian
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2315 - 2327
  • [7] Structure-Aware Adaptive Diffusion for Video Saliency Detection
    Chen, Chenglizhao
    Wang, Guotao
    Peng, Chong
    IEEE ACCESS, 2019, 7 : 79770 - 79782
  • [8] Spatially Self-Paced Convolutional Networks for Change Detection in Heterogeneous Images
    Li, Hao
    Gong, Maoguo
    Zhang, Mingyang
    Wu, Yue
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 4966 - 4979
  • [9] Saliency detection via a multi-layer graph based diffusion model
    Jiang, Bo
    He, Zhouqin
    Ding, Chris
    Luo, Bin
    NEUROCOMPUTING, 2018, 314 : 215 - 223
  • [10] Fast Video Saliency Detection via Maximally Stable Region Motion and Object Repeatability
    Huang, Xiaoming
    Zhang, Yu-Jin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 4458 - 4470