Exploring the Better Correlation for Few-Shot Video Object Segmentation

Times Cited: 0
Authors
Luo, Naisong [1 ]
Wang, Yuan [1 ]
Sun, Rui [1 ]
Xiong, Guoxin [1 ]
Zhang, Tianzhu [1 ,2 ]
Wu, Feng [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci, Hefei 230027, Peoples R China
[2] Deep Space Explorat Lab, Hefei 230088, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Few-shot video object segmentation; video object segmentation; few-shot learning;
DOI
10.1109/TCSVT.2024.3491214
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Few-shot video object segmentation (FSVOS) aims to accurately segment novel objects in a given video sequence, where the target objects are specified by only a few annotated support images. Most previous top-performing methods adopt either the support-query semantic correlation learning paradigm or the intra-query temporal correlation learning paradigm. Nevertheless, they either fail to model temporal consistency across frames, resulting in temporally inconsistent segmentation, or lose diverse support object information, leading to incomplete segmentation. We therefore argue that it is more desirable to model both correlations collaboratively. In this work, we analyze the issues that arise when few-shot image segmentation methods are combined with video object segmentation methods, and propose a dedicated Collaborative Correlation Network (CoCoNet), consisting of a pixel correlation calibration module and a temporal correlation mining module, to address them. The proposed CoCoNet enjoys several merits. First, the pixel correlation calibration module mitigates noise in the support-query correlation by integrating an affinity learning strategy with a prototype learning strategy. Specifically, we employ Optimal Transport to enrich the pixel correlation with contextual information, thereby reducing intra-class differences between support and query. Second, the temporal correlation mining module alleviates the uncertainty in the initial frame and establishes reliable guidance for subsequent frames of the query video. With the collaboration of these two modules, CoCoNet effectively establishes support-query and temporal correlations simultaneously and achieves accurate FSVOS. Extensive experimental results on two challenging benchmarks demonstrate that our method performs favorably against state-of-the-art FSVOS methods.
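The abstract states that Optimal Transport is used to enrich the support-query pixel correlation with contextual information. Below is a minimal sketch of entropic optimal transport computed with Sinkhorn iterations (Cuturi, 2013) between support and query pixel features; the function name, tensor shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): entropic optimal transport via
# Sinkhorn iterations, coupling support pixels to query pixels so that a
# support-query affinity can be calibrated with contextual (global) structure.
import numpy as np

def sinkhorn_transport(support_feats, query_feats, eps=0.05, n_iters=50):
    """Return a transport plan of shape (Ns, Nq).

    support_feats: (Ns, C) L2-normalized support pixel features (hypothetical shape).
    query_feats:   (Nq, C) L2-normalized query pixel features.
    """
    cost = 1.0 - support_feats @ query_feats.T            # cosine distance cost matrix
    K = np.exp(-cost / eps)                               # Gibbs kernel for entropic regularization
    a = np.full(support_feats.shape[0], 1.0 / support_feats.shape[0])  # uniform source marginal
    b = np.full(query_feats.shape[0], 1.0 / query_feats.shape[0])      # uniform target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                              # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                    # transport plan coupling the two pixel sets

# Example usage with random (hypothetical) features:
# s = np.random.randn(64, 256); s /= np.linalg.norm(s, axis=1, keepdims=True)
# q = np.random.randn(128, 256); q /= np.linalg.norm(q, axis=1, keepdims=True)
# plan = sinkhorn_transport(s, q)
# A row-normalized plan (plan / plan.sum(1, keepdims=True)) could then serve as
# a context-aware support-query affinity in place of raw pixel-wise similarity.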
Pages: 2133-2146
Number of Pages: 14