SWINBERT: End-to-End Transformers with Sparse Attention for Video Captioning

Cited by: 136
Authors
Lin, Kevin [1 ]
Li, Linjie [1 ]
Lin, Chung-Ching [1 ]
Ahmed, Faisal [1 ]
Gan, Zhe [1 ]
Liu, Zicheng [1 ]
Lu, Yumao [1 ]
Wang, Lijuan [1 ]
Affiliations
[1] Microsoft, Redmond, WA 98052 USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
DOI
10.1109/CVPR52688.2022.01742
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The canonical approach to video captioning dictates a caption generation model to learn from offline-extracted dense video features. These feature extractors usually operate on video frames sampled at a fixed frame rate and are often trained on image/video understanding tasks, without adaptation to video captioning data. In this work, we present SWINBERT, an end-to-end transformer-based model for video captioning, which takes video frame patches directly as inputs, and outputs a natural language description. Instead of leveraging multiple 2D/3D feature extractors, our method adopts a video transformer to encode spatial-temporal representations that can adapt to variable lengths of video input without dedicated design for different frame rates. Based on this model architecture, we show that video captioning can benefit significantly from more densely sampled video frames, as opposed to previous successes with sparsely sampled video frames for video-and-language understanding tasks (e.g., video question answering). Moreover, to avoid the inherent redundancy in consecutive video frames, we propose adaptively learning a sparse attention mask and optimizing it for task-specific performance improvement through better long-range video sequence modeling. Through extensive experiments on 5 video captioning datasets, we show that SWINBERT achieves across-the-board performance improvements over previous methods, often by a large margin. The learned sparse attention masks in addition push the limit to new state-of-the-art results, and can be transferred between different video lengths and between different datasets. Code is available at https://github.com/microsoft/SwinBERT.
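The abstract's central technical idea is a sparse attention mask that is learned jointly with the task objective. The module below is a minimal, hypothetical sketch of that idea in PyTorch; the class, parameter names, and loss formulation are our own illustration, not the authors' implementation (see the linked repository for that). It multiplies a sigmoid-activated learnable mask into the attention weights and exposes a sparsity penalty that a training loop could add to the captioning loss.

```python
import torch
import torch.nn as nn


class LearnableSparseAttention(nn.Module):
    """Self-attention with a learnable soft attention mask (illustrative sketch).

    A learnable logit per token pair is passed through a sigmoid and
    multiplied into the attention weights; an L1-style penalty on the
    mask values encourages sparsity during training.
    """

    def __init__(self, num_tokens: int, dim: int, num_heads: int = 4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable mask logit per token pair, shared across heads.
        self.mask_logits = nn.Parameter(torch.zeros(num_tokens, num_tokens))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, H, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = attn.softmax(dim=-1)
        # Damp token pairs whose learned mask value is near zero.
        attn = attn * torch.sigmoid(self.mask_logits)
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)  # renormalize
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)

    def sparsity_loss(self) -> torch.Tensor:
        # L1 penalty pushing mask values toward 0 (sparser attention);
        # a trainer would add lambda * sparsity_loss() to the main loss.
        return torch.sigmoid(self.mask_logits).mean()
```

Because the mask is a plain parameter optimized with the rest of the network, it can in principle be copied to another model of the same token count, which is one plausible reading of the transferability the abstract reports.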
Pages: 17928-17937 (10 pages)