Language-Bridged Spatial-Temporal Interaction for Referring Video Object Segmentation

Cited by: 39
Authors
Ding, Zihan [1 ,4 ,5 ]
Hui, Tianrui [2 ,3 ]
Huang, Junshi [4 ]
Wei, Xiaoming [4 ]
Han, Jizhong [2 ,3 ]
Liu, Si [1 ,5 ]
Affiliations
[1] Beihang Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[2] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[3] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
[4] Meituan, Beijing, Peoples R China
[5] Beihang Univ, Hangzhou Innovat Inst, Hangzhou, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022
Funding
National Natural Science Foundation of China
DOI
10.1109/CVPR52688.2022.00491
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Referring video object segmentation aims to predict foreground labels for objects referred to by natural language expressions in videos. Previous methods either depend on 3D ConvNets or incorporate additional 2D ConvNets as encoders to extract mixed spatial-temporal features. However, these methods suffer from spatial misalignment or false distractors due to delayed and implicit spatial-temporal interaction occurring in the decoding phase. To tackle these limitations, we propose a Language-Bridged Duplex Transfer (LBDT) module which utilizes language as an intermediary bridge to accomplish explicit and adaptive spatial-temporal interaction earlier, in the encoding phase. Concretely, cross-modal attention is performed among the temporal encoder, referring words and the spatial encoder to aggregate and transfer language-relevant motion and appearance information. In addition, we propose a Bilateral Channel Activation (BCA) module in the decoding phase for further denoising and highlighting spatial-temporal consistent features via channel-wise activation. Extensive experiments show our method achieves new state-of-the-art performance on four popular benchmarks, with 6.8% and 6.9% absolute AP gains on A2D Sentences and J-HMDB Sentences respectively, while consuming around 7x less computational overhead.
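The language-bridged transfer described in the abstract can be pictured as two chained cross-attention steps: words first gather language-relevant motion cues from the temporal encoder, then spatial features receive those cues with the words acting as the bridge. Below is a minimal PyTorch sketch of one direction of this duplex transfer (temporal -> language -> spatial); the class name, the single-head attention, the flattened feature shapes and the residual fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LanguageBridgedTransfer(nn.Module):
    """Sketch of one direction of the duplex transfer (hypothetical design)."""
    def __init__(self, dim: int):
        super().__init__()
        # Step 1 attention: words query the temporal (motion) features.
        self.temp_to_word = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        # Step 2 attention: spatial features query the motion-enriched words.
        self.word_to_spat = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, spatial_feat, temporal_feat, word_feat):
        # spatial_feat:  (B, HW, C) flattened appearance features
        # temporal_feat: (B, HW, C) flattened motion features
        # word_feat:     (B, L, C)  referring-expression word embeddings
        # Words aggregate language-relevant motion information.
        motion_words, _ = self.temp_to_word(word_feat, temporal_feat, temporal_feat)
        # Spatial features receive that motion via the word bridge.
        enhanced, _ = self.word_to_spat(spatial_feat, motion_words, motion_words)
        # Residual fusion keeps the original appearance features intact.
        return spatial_feat + enhanced
```

The reverse direction (spatial -> language -> temporal) would mirror the same two steps with the roles of the encoders swapped, which is what makes the transfer "duplex" in the paper's terminology.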
Pages: 4954-4963
Number of pages: 10