Exploring Rich and Efficient Spatial Temporal Interactions for Real-Time Video Salient Object Detection

Cited by: 88
Authors
Chen, Chenglizhao [1 ]
Wang, Guotao [1 ]
Peng, Chong [1 ]
Fang, Yuming [2 ]
Zhang, Dingwen [3 ]
Qin, Hong [4 ]
Affiliations
[1] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266000, Shandong, Peoples R China
[2] Jiangxi Univ Finance & Econ, Nanchang 330013, Jiangxi, Peoples R China
[3] Xidian Univ, Xian 710129, Peoples R China
[4] SUNY Stony Brook, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Three-dimensional displays; Spatiotemporal phenomena; Convolution; Optical sensors; Decoding; Optical network units; Optical imaging; Video salient object detection; lightweight temporal unit; fast temporal shuffle; multiscale spatiotemporal deep features; OPTIMIZATION; IMAGE;
DOI
10.1109/TIP.2021.3068644
CLC classification number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
We have witnessed a growing interest in video salient object detection (VSOD) techniques in today's computer vision applications. In contrast to temporal information (still considered a rather unstable source thus far), spatial information is more stable and ubiquitous, and thus exerts a stronger influence on our visual system. As a result, current mainstream VSOD approaches infer saliency primarily from the spatial perspective, treating temporal information as subordinate. Although this spatially focused methodology is effective in achieving a numeric performance gain, it has two critical limitations. First, to ensure the dominance of spatial information, the temporal counterpart remains inadequately used, even though in some complex video scenes temporal information may be the only reliable data source and is critical for deriving the correct VSOD. Second, spatial and temporal saliency cues are often computed independently in advance and only integrated later, omitting the interactions between them entirely and yielding saliency cues of limited quality. To combat these challenges, this paper advocates a novel spatiotemporal network whose key innovation is the design of its temporal unit. Compared with existing competitors (e.g., convLSTM), the proposed temporal unit has an extremely lightweight design that does not degrade its strong ability to sense temporal information. Furthermore, it enables temporal saliency cues to fully interact with their spatial counterparts, boosting overall VSOD performance and realizing the mutual-improvement potential of both. The proposed method is easy to implement yet effective, achieving high-quality VSOD at 50 FPS in real-time applications.
Pages: 3995-4007
Number of pages: 13
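The abstract's "fast temporal shuffle" inside a lightweight temporal unit is not specified in this record. As a hedged illustration of the general idea (not the authors' exact design), the sketch below shows how exchanging a small fraction of feature channels between neighboring frames lets per-frame spatial convolutions sense temporal context at near-zero extra cost; the function name, `shift_ratio` parameter, and `(T, C, H, W)` layout are all assumptions made for this example.

```python
import numpy as np

def temporal_shift(features, shift_ratio=0.25):
    """Illustrative channel-wise temporal shift for a video clip.

    features: array of shape (T, C, H, W) -- T frames of C-channel maps.
    A slice of channels is shifted backward in time, another slice
    forward, and the remainder is left untouched, so a subsequent
    2D (spatial) convolution implicitly mixes neighboring frames.
    """
    T, C, H, W = features.shape
    fold = int(C * shift_ratio)          # channels moved in each direction
    out = np.zeros_like(features)
    out[:-1, :fold] = features[1:, :fold]              # shift backward in time
    out[1:, fold:2 * fold] = features[:-1, fold:2 * fold]  # shift forward in time
    out[:, 2 * fold:] = features[:, 2 * fold:]         # keep remaining channels
    return out
```

Because the operation is pure memory movement with no learned parameters, it keeps the temporal unit lightweight, which is consistent with the paper's stated goal of real-time (50 FPS) inference.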