Video Saliency Map Detection by Dominant Camera Motion Removal

Cited by: 41
Authors
Huang, Chun-Rong [1 ]
Chang, Yun-Jung [1 ,2 ]
Yang, Zhi-Xiang [1 ,2 ]
Lin, Yen-Yu [2 ]
Affiliations
[1] Natl Chung Hsing Univ, Dept Comp Sci & Engn, Taichung 402, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei 11529, Taiwan
Keywords
One-class SVM (OCSVM); trajectory; video saliency map; visual attention; model; contrast; image; recognition
DOI
10.1109/TCSVT.2014.2308652
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
We present a trajectory-based approach to detect salient regions in videos by dominant camera motion removal. Our approach is designed in a general way so that it can be applied to videos taken by either stationary or moving cameras without any prior information. Moreover, multiple salient regions of different temporal lengths can also be detected. To this end, we extract a set of spatially and temporally coherent trajectories of keypoints in a video. Then, velocity and acceleration entropies are proposed to represent the trajectories. In this way, long-term object motions are exploited to filter out short-term noise, and object motions of various temporal lengths can be represented in the same way. On the other hand, we are inspired by the observation that the trajectories in backgrounds, i.e., the nonsalient trajectories, are usually consistent with the dominant camera motion no matter whether the camera is stationary or not. We make use of this property to develop a unified approach to saliency generation for both stationary and moving cameras. Specifically, one-class SVM is employed to remove the trajectories consistent with the dominant motion. It follows that the salient regions can be highlighted by applying a diffusion process to the remaining trajectories. In addition, we provide manually annotated ground truth for the collected videos, which is used for performance evaluation and comparison. The promising results on various types of videos demonstrate the effectiveness and broad applicability of our approach.
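The pipeline the abstract describes (entropy features per trajectory, then one-class SVM to discard trajectories consistent with the dominant camera motion) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes direction-histogram Shannon entropies for the velocity and acceleration features, uses scikit-learn's `OneClassSVM` as the one-class classifier, and omits the keypoint tracking and the final diffusion step. The function names and the `nu`/`bins` parameters are illustrative choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM


def direction_entropy(points, bins=8):
    """Shannon entropy of motion directions along a sequence of 2-D points.

    Applied to a trajectory of positions this yields a velocity-direction
    entropy; applied to the velocities it yields an acceleration-direction
    entropy (one illustrative reading of the paper's two entropy features).
    """
    v = np.diff(points, axis=0)               # per-step displacement vectors
    angles = np.arctan2(v[:, 1], v[:, 0])     # motion directions in [-pi, pi]
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def salient_trajectories(trajs, nu=0.1):
    """Keep trajectories that a one-class SVM flags as inconsistent
    with the dominant (camera-induced) motion.

    trajs: list of (T, 2) arrays of keypoint positions over time.
    """
    feats = []
    for t in trajs:
        velocities = np.diff(t, axis=0)
        feats.append([direction_entropy(t),            # velocity entropy
                      direction_entropy(velocities)])  # acceleration entropy
    X = np.asarray(feats)
    # The OCSVM models the dominant motion as the inlier class (+1);
    # outliers (-1) are candidate salient-object trajectories.
    labels = OneClassSVM(nu=nu, gamma="scale").fit_predict(X)
    return [t for t, lab in zip(trajs, labels) if lab == -1]
```

In this reading, background trajectories cluster tightly in the entropy feature space because they all follow the camera motion, so the OCSVM learns them as the inlier class without any labeled data; the paper's diffusion step would then spread saliency from the surviving trajectories to the surrounding pixels.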
Pages: 1336-1349
Page count: 14
Related Papers
50 records
  • [1] VIDEO SALIENCY MAP DETECTION BASED ON GLOBAL MOTION ESTIMATION
    Xu, Jun
    Tu, Qin
    Li, Cuiwei
    Gao, Ran
    Men, Aidong
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2015,
  • [2] Motion-Aware Rapid Video Saliency Detection
    Guo, Fang
    Wang, Wenguan
    Shen, Ziyi
    Shen, Jianbing
    Shao, Ling
    Tao, Dacheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (12) : 4887 - 4898
  • [3] Video Saliency Detection Based on Boolean Map Theory
    Kalboussi, Rahma
    Abdellaoui, Mehrez
    Douik, Ali
    IMAGE ANALYSIS AND PROCESSING (ICIAP 2017), PT I, 2017, 10484 : 119 - 128
  • [4] Video saliency detection by gestalt theory
    Fang, Yuming
    Zhang, Xiaoqiang
    Yuan, Feiniu
    Imamoglu, Nevrez
    Liu, Haiwen
    PATTERN RECOGNITION, 2019, 96
  • [5] Fusion hierarchy motion feature for video saliency detection
    Xiao, Fen
    Luo, Huiyu
    Zhang, Wenlei
    Li, Zhen
    Gao, Xieping
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (11) : 32301 - 32320
  • [6] A NOVEL VIDEO SALIENCY MAP DETECTION MODEL IN COMPRESSED DOMAIN
    Xu, Jun
    Guo, Xiaoqiang
    Tu, Qin
    Li, Cuiwei
    Men, Aidong
    2015 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2015), 2015, : 157 - 162
  • [7] Pattern mining based video saliency detection
    Ramadan, Hiba
    Tairi, Hamid
    2017 INTELLIGENT SYSTEMS AND COMPUTER VISION (ISCV), 2017,
  • [8] Video Saliency Detection via Graph Clustering With Motion Energy and Spatiotemporal Objectness
    Xu, Mingzhu
    Liu, Bing
    Fu, Ping
    Li, Junbao
    Hu, Yu Hen
    IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (11) : 2790 - 2805
  • [9] Motion saliency detection based on temporal difference
    Wang, Zhihu
    Xiong, Jiulong
    Zhang, Qi
    JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (03)
  • [10] Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach
    Zhang, Jianming
    Sclaroff, Stan
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (05) : 889 - 902