Correspondence Matters for Video Referring Expression Comprehension

Cited by: 8
Authors
Cao, Meng [1]
Jiang, Ji [1]
Chen, Long [2]
Zou, Yuexian [1,3]
Affiliations
[1] Peking University, SECE, Beijing, China
[2] Columbia University, New York, NY 10027, USA
[3] Peng Cheng Laboratory, Shenzhen, China
Source
Proceedings of the 30th ACM International Conference on Multimedia (MM 2022), 2022
Keywords
Video Referring Expression Comprehension; Inter-Frame Contrastive Learning; Cross-Modal Contrastive Learning; Tracking
DOI
10.1145/3503161.3547756
CLC Number (Chinese Library Classification)
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
We investigate the problem of video Referring Expression Comprehension (REC), which aims to localize the referent object described by a sentence to visual regions in the video frames. Despite recent progress, existing methods suffer from two problems: 1) inconsistent localization results across video frames; 2) confusion between the referent and contextual objects. To this end, we propose a novel Dual Correspondence Network (dubbed DCNet) which explicitly enhances dense associations in both inter-frame and cross-modal manners. First, we build inter-frame correlations for all instances appearing within the frames. Specifically, we compute inter-frame patch-wise cosine similarity to estimate the dense alignment and then perform inter-frame contrastive learning to map corresponding patches close together in feature space. Second, we build a fine-grained patch-word alignment to associate each patch with particular words. Since such detailed annotations are unavailable, we also predict the patch-word correspondence through cosine similarity. Extensive experiments demonstrate that our DCNet achieves state-of-the-art performance on both video and image REC benchmarks. Furthermore, we conduct comprehensive ablation studies and thorough analyses to explore the optimal model designs. Notably, our inter-frame and cross-modal contrastive losses are plug-and-play functions applicable to any video REC architecture. For example, by building on top of Co-grounding [44], we boost performance by an absolute 1.48% on Accu.@0.5 on the VID-Sentence dataset. Our code is available at https://github.com/mengcaopku/DCNet.
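To make the two plug-and-play losses mentioned in the abstract concrete, below is a minimal sketch, assuming per-frame patch features of shape [N, D] and per-sentence word features of shape [L, D]. The function name pseudo_aligned_contrastive and the InfoNCE-style formulation are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the DCNet release): cosine-similarity pseudo alignment
# plus an InfoNCE-style contrastive loss, applied both inter-frame and
# cross-modally. All shapes and names are illustrative assumptions.
import torch
import torch.nn.functional as F

def pseudo_aligned_contrastive(query, key, temperature=0.07):
    """Estimate dense alignment via cosine similarity, then treat each query's
    best-matching key as its positive and all other keys as negatives."""
    q = F.normalize(query, dim=-1)
    k = F.normalize(key, dim=-1)
    logits = q @ k.t() / temperature        # pairwise cosine similarities
    targets = logits.argmax(dim=-1)         # pseudo correspondence (no labels)
    return F.cross_entropy(logits, targets)

# Inter-frame loss: pull corresponding patches of two frames together.
patches_t, patches_t1 = torch.randn(49, 256), torch.randn(49, 256)
loss_inter = pseudo_aligned_contrastive(patches_t, patches_t1)

# Cross-modal loss: associate each patch with its most similar word.
words = torch.randn(12, 256)
loss_cross = pseudo_aligned_contrastive(patches_t, words)

total_aux_loss = loss_inter + loss_cross    # added on top of the base REC loss
```

In this reading, the correspondence targets are re-estimated from the cosine similarities rather than read from annotations, since, as the abstract notes, no dense patch-level or patch-word labels exist.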
Pages: 4967-4976
Number of pages: 10
References (72 in total)
  • [1] Antol, Stanislaw; Agrawal, Aishwarya; Lu, Jiasen; Mitchell, Margaret; Batra, Dhruv; Zitnick, C. Lawrence; Parikh, Devi. VQA: Visual Question Answering. 2015 IEEE International Conference on Computer Vision (ICCV), 2015: 2425-2433.
  • [2] Arandjelovic, R. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018, 40: 1437. DOI: 10.1109/TPAMI.2017.2711011, 10.1109/CVPR.2016.572.
  • [3] Bolme, D. S. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010: 2544. DOI: 10.1109/CVPR.2010.5539960.
  • [4] Cao, Meng. 2021, arXiv:2108.05607.
  • [5] Cao, Meng. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
  • [6] Chen, Howard; Suhr, Alane; Misra, Dipendra; Snavely, Noah; Artzi, Yoav. TOUCHDOWN: Natural Language Navigation and Spatial Reasoning in Visual Street Environments. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019: 12530-12539.
  • [7] Chen, Long; Jiang, Zhihong; Xiao, Jun; Liu, Wei. Human-like Controllable Image Captioning with Verb-specific Semantic Roles. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 16841-16851.
  • [8] Chen, L. AAAI Conference on Artificial Intelligence, 2021, 35: 1036.
  • [9] Chen, Long; Zhang, Hanwang; Xiao, Jun; He, Xiangnan; Pu, Shiliang; Chang, Shih-Fu. Counterfactual Critic Multi-Agent Training for Scene Graph Generation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019: 4612-4622.
  • [10] Chen, Long; Zhang, Hanwang; Xiao, Jun; Nie, Liqiang; Shao, Jian; Liu, Wei; Chua, Tat-Seng. SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 6298-6306.