SpVOS: Efficient Video Object Segmentation With Triple Sparse Convolution

Cited by: 1
Authors
Lin, Weihao [1 ]
Chen, Tao [1 ]
Yu, Chong [2 ]
Affiliations
[1] Fudan Univ, Sch Informat Sci & Technol, Shanghai 200433, Peoples R China
[2] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Video object segmentation; convolutional neural networks; sparse convolution; proposal generation
DOI
10.1109/TIP.2023.3327588
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised video object segmentation (Semi-VOS), which requires annotating only the first frame of a video in order to segment subsequent frames, has received increasing attention recently. Among existing Semi-VOS pipelines, the memory-matching-based one has become the main research stream, as it fully exploits temporal sequence information to obtain high-quality segmentation results. Although this type of method achieves promising performance, the overall framework still suffers from heavy computation overhead, mainly caused by per-frame dense convolution operations between high-resolution feature maps and every kernel filter. We therefore propose a sparse VOS baseline named SpVOS, which introduces a novel triple sparse convolution to reduce the computation cost of the overall VOS framework. The designed triple gate, taking both spatial and temporal redundancy between adjacent video frames into account, adaptively makes a three-way decision on how to apply sparse convolution at each pixel, controlling the computation overhead of each layer while maintaining sufficient discrimination capability to distinguish similar objects and avoid error accumulation. A mixed sparse training strategy, coupled with an objective that incorporates a sparsity constraint, is also developed to balance segmentation performance against computation cost. Experiments are conducted on two mainstream VOS datasets, DAVIS and YouTube-VOS. Results show that the proposed SpVOS outperforms other state-of-the-art sparse methods and remains comparable to the typical non-sparse VOS baseline, e.g., an 83.04% (79.29%) overall score on the DAVIS-2017 (YouTube-VOS) validation set versus 82.88% (80.36%) for the baseline, while saving up to 42% of FLOPs, demonstrating its potential for resource-constrained scenarios.
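The per-pixel gating idea described in the abstract can be illustrated with a short, self-contained sketch. The snippet below is not the authors' implementation; it is a minimal PyTorch illustration under the assumption that the triple gate chooses, for every spatial location, between reusing the previous frame's output feature, a cheap 1x1 convolution, and the full dense 3x3 convolution, with a straight-through Gumbel-softmax making the hard decision differentiable during training. All names (TripleGateConv2d, gate_head, dense_ratio, etc.) are hypothetical.

# A minimal sketch (not the paper's implementation) of a per-pixel three-way gate:
#   0: reuse the previous frame's output feature (exploit temporal redundancy)
#   1: apply a cheap pointwise (1x1) convolution
#   2: apply the full dense 3x3 convolution
import torch
import torch.nn as nn
import torch.nn.functional as F


class TripleGateConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, tau: float = 1.0):
        super().__init__()
        self.dense = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # full path
        self.cheap = nn.Conv2d(in_ch, out_ch, kernel_size=1)             # cheap path
        self.gate_head = nn.Conv2d(2 * in_ch, 3, kernel_size=1)          # per-pixel 3-way logits
        self.tau = tau

    def forward(self, x, x_prev, y_prev):
        # x:      current-frame input features,   shape (B, C_in, H, W)
        # x_prev: previous-frame input features,  shape (B, C_in, H, W)
        # y_prev: previous-frame output features, shape (B, C_out, H, W)
        logits = self.gate_head(torch.cat([x, x - x_prev], dim=1))       # (B, 3, H, W)

        if self.training:
            # Differentiable hard decision via straight-through Gumbel-softmax.
            gate = F.gumbel_softmax(logits, tau=self.tau, hard=True, dim=1)
        else:
            # Hard argmax at inference.
            gate = F.one_hot(logits.argmax(dim=1), num_classes=3)
            gate = gate.permute(0, 3, 1, 2).float()

        y_dense = self.dense(x)
        y_cheap = self.cheap(x)

        # Blend the three branches according to the per-pixel decision.
        y = (gate[:, 0:1] * y_prev +
             gate[:, 1:2] * y_cheap +
             gate[:, 2:3] * y_dense)

        # Fraction of pixels taking the dense path; can be penalised in the
        # training objective to trade accuracy against FLOPs.
        dense_ratio = gate[:, 2].mean()
        return y, dense_ratio


if __name__ == "__main__":
    layer = TripleGateConv2d(in_ch=64, out_ch=64)
    x, x_prev = torch.randn(1, 64, 60, 60), torch.randn(1, 64, 60, 60)
    y_prev = torch.randn(1, 64, 60, 60)
    y, dense_ratio = layer(x, x_prev, y_prev)
    print(y.shape, float(dense_ratio))

In this sketch all three branches are evaluated densely purely for clarity; a real sparse implementation would compute only the branch selected at each pixel (e.g., via gather/scatter or sparse convolution kernels), which is where the FLOP savings reported in the abstract would come from.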
Pages: 5977-5991
Number of pages: 15