Joint Inductive and Transductive Learning for Video Object Segmentation

Cited by: 64
Authors
Mao, Yunyao [1 ]
Wang, Ning [1 ]
Zhou, Wengang [1 ,2 ]
Li, Houqiang [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, EEIS Dept, CAS Key Lab Technol GIPAS, Hefei, Anhui, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Anhui, Peoples R China
Source
2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021) | 2021
Funding
National Natural Science Foundation of China
Keywords
DOI
10.1109/ICCV48922.2021.00953
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Semi-supervised video object segmentation is the task of segmenting the target object in a video sequence given only a mask annotation in the first frame. The limited information available makes it an extremely challenging task. Most previous best-performing methods adopt matching-based transductive reasoning or online inductive learning. Nevertheless, they are either less discriminative for similar instances or make insufficient use of spatio-temporal information. In this work, we propose to integrate transductive and inductive learning into a unified framework to exploit their complementarity for accurate and robust video object segmentation. The proposed approach consists of two functional branches. The transduction branch adopts a lightweight transformer architecture to aggregate rich spatio-temporal cues, while the induction branch performs online inductive learning to obtain discriminative target information. To bridge these two diverse branches, a two-head label encoder is introduced to learn the suitable target prior for each of them. The generated mask encodings are further forced to be disentangled to better retain their complementarity. Extensive experiments on several prevalent benchmarks show that, without the need for synthetic training data, the proposed approach sets a series of new state-of-the-art records. Code is available at https://github.com/maoyunyao/JOINT.
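
The following is a minimal, illustrative PyTorch sketch of the two-branch idea described in the abstract: a transduction branch that attends from current-frame features to memory-frame features, an induction branch realized as a small target model that would be fitted online on the annotated first frame, and a simple fusion head. All module names, tensor shapes, and the fusion scheme are assumptions made for illustration only and are not taken from the paper; the authors' actual implementation is available at https://github.com/maoyunyao/JOINT.

    # Illustrative sketch only; not the authors' implementation.
    import torch
    import torch.nn as nn

    class TransductionBranch(nn.Module):
        """Aggregates spatio-temporal cues by attending from current-frame
        features to memory-frame features (a stand-in for a lightweight
        transformer)."""
        def __init__(self, dim=256, heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, cur_feat, mem_feat, mem_mask_enc):
            # cur_feat: (B, HW, C); mem_feat, mem_mask_enc: (B, T*HW, C)
            out, _ = self.attn(query=cur_feat, key=mem_feat,
                               value=mem_feat + mem_mask_enc)
            return out

    class InductionBranch(nn.Module):
        """A tiny target model whose weights would be optimized online on the
        first annotated frame (the online optimization loop is omitted)."""
        def __init__(self, dim=256):
            super().__init__()
            self.target_model = nn.Conv2d(dim, 1, kernel_size=3, padding=1)

        def forward(self, cur_feat_2d):
            # cur_feat_2d: (B, C, H, W) -> coarse target score map (B, 1, H, W)
            return self.target_model(cur_feat_2d)

    class JointSegHead(nn.Module):
        """Fuses the two branches and predicts a mask logit map."""
        def __init__(self, dim=256):
            super().__init__()
            self.transduction = TransductionBranch(dim)
            self.induction = InductionBranch(dim)
            self.fuse = nn.Conv2d(dim + 1, 1, kernel_size=3, padding=1)

        def forward(self, cur_feat_2d, mem_feat, mem_mask_enc):
            b, c, h, w = cur_feat_2d.shape
            cur_tokens = cur_feat_2d.flatten(2).transpose(1, 2)      # (B, HW, C)
            trans_out = self.transduction(cur_tokens, mem_feat, mem_mask_enc)
            trans_out = trans_out.transpose(1, 2).reshape(b, c, h, w)
            ind_score = self.induction(cur_feat_2d)                  # (B, 1, H, W)
            return self.fuse(torch.cat([trans_out, ind_score], dim=1))

    # Usage with random tensors (B=1, C=256, H=W=30, T=2 memory frames):
    if __name__ == "__main__":
        head = JointSegHead()
        cur = torch.randn(1, 256, 30, 30)
        mem = torch.randn(1, 2 * 30 * 30, 256)
        mem_mask = torch.randn(1, 2 * 30 * 30, 256)
        print(head(cur, mem, mem_mask).shape)  # torch.Size([1, 1, 30, 30])
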
Pages: 9650-9659
Page count: 10