Scalable Video Object Segmentation With Identification Mechanism

Cited by: 7
Authors
Yang, Zongxin [1 ]
Miao, Jiaxu [2 ]
Wei, Yunchao [3 ]
Wang, Wenguan [1 ]
Wang, Xiaohan [1 ]
Yang, Yi [1 ]
Affiliations
[1] Zhejiang Univ, ReLER, CCAI, Hangzhou 310027, Peoples R China
[2] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen 518063, Peoples R China
[3] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Benchmark testing; Object segmentation; Decoding; Object recognition; Scalability; Annotations; Identification mechanism; video object segmentation; vision transformer;
DOI
10.1109/TPAMI.2024.3383592
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper delves into the challenges of achieving scalable and effective multi-object modeling for semi-supervised Video Object Segmentation (VOS). Previous VOS methods decode features with a single positive object, limiting the learning of multi-object representation as they must match and segment each target separately under multi-object scenarios. Additionally, earlier techniques catered to specific application objectives and lacked the flexibility to fulfill different speed-accuracy requirements. To address these problems, we present two innovative approaches, Associating Objects with Transformers (AOT) and Associating Objects with Scalable Transformers (AOST). In pursuit of effective multi-object modeling, AOT introduces the IDentification (ID) mechanism to allocate each object a unique identity. This approach enables the network to model the associations among all objects simultaneously, thus facilitating the tracking and segmentation of objects in a single network pass. To address the challenge of inflexible deployment, AOST further integrates scalable long short-term transformers that incorporate scalable supervision and layer-wise ID-based attention. This enables online architecture scalability in VOS for the first time and overcomes the representation limitations of ID embeddings. Given the absence of a benchmark for VOS involving densely multi-object annotations, we propose a challenging Video Object Segmentation in the Wild (VOSW) benchmark to validate our approaches. We evaluated various AOT and AOST variants using extensive experiments across VOSW and five commonly used VOS benchmarks, including YouTube-VOS 2018 & 2019 Val, DAVIS-2017 Val & Test, and DAVIS-2016. Our approaches surpass the state-of-the-art competitors and display exceptional efficiency and scalability consistently across all six benchmarks. Moreover, we notably achieved the 1st position in the 3rd Large-scale Video Object Segmentation Challenge.
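The core of the ID mechanism, as described above, is to assign each object a unique identity embedding so that all objects are encoded into one feature map and segmented in a single network pass. A toy sketch in pure Python (the sizes `M` and `C` and the random embedding bank are illustrative assumptions, not the paper's actual configuration):

```python
import random

random.seed(0)

# Hypothetical sizes (illustrative only): up to M object identities plus
# background, each mapped to a C-dimensional embedding vector.
M, C = 10, 8
id_bank = [[random.gauss(0.0, 1.0) for _ in range(C)] for _ in range(M + 1)]

# A toy multi-object mask: each pixel stores the identity of its object
# (0 = background), so all objects share ONE label map.
mask = [[0, 1, 1],
        [0, 2, 2],
        [3, 3, 0]]

# The ID mechanism, sketched: every pixel looks up the embedding of its
# assigned identity, encoding all objects at once instead of running the
# network once per object with a separate binary foreground mask.
id_embedding = [[id_bank[pid] for pid in row] for row in mask]

print(len(id_embedding), len(id_embedding[0]), len(id_embedding[0][0]))  # 3 3 8
```

In the actual method, the embedding bank would be learned jointly with the transformer, and the resulting identification embedding is attended over by the long short-term transformer layers; this sketch only shows the per-pixel identity lookup.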
Pages: 6247-6262 (16 pages)