VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs

Cited by: 33
Authors
Lin, Xudong [1]
Bertasius, Gedas [2]
Wang, Jue [2]
Chang, Shih-Fu [1]
Parikh, Devi [2,3]
Torresani, Lorenzo [2,4]
Affiliations
[1] Columbia University, New York, NY 10027, USA
[2] Facebook AI, Menlo Park, CA, USA
[3] Georgia Tech, Atlanta, GA, USA
[4] Dartmouth College, Hanover, NH, USA
Source
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021
Keywords
Attention
DOI
10.1109/CVPR46437.2021.00693
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We present VX2TEXT, a framework for text generation from multimodal inputs consisting of video plus text, speech, or audio. In order to leverage transformer networks, which have been shown to be effective at modeling language, each modality is first converted into a set of language embeddings by a learnable tokenizer. This allows our approach to perform multimodal fusion in the language space, thus eliminating the need for ad hoc cross-modal fusion modules. To address the non-differentiability of tokenization on continuous inputs (e.g., video or audio), we utilize a relaxation scheme that enables end-to-end training. Furthermore, unlike prior encoder-only models, our network includes an autoregressive decoder to generate open-ended text from the multimodal embeddings fused by the language encoder. This renders our approach fully generative and makes it directly applicable to different "video+x to text" problems without the need to design specialized network heads for each task. The proposed framework is not only conceptually simple but also remarkably effective: experiments demonstrate that our approach based on a single architecture outperforms the state-of-the-art on three video-based text-generation tasks: captioning, question answering, and audio-visual scene-aware dialog.
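The relaxation scheme mentioned in the abstract addresses the fact that selecting discrete category tokens from a video or audio classifier is non-differentiable. Below is a minimal sketch of what such a relaxation can look like, assuming, purely as an illustration and not as the authors' exact implementation, that a modality classifier's softmax probabilities are used to form a convex combination of category word embeddings; the class SoftTokenizer, all names, and all dimensions are hypothetical.

# Hedged sketch of differentiable ("soft") tokenization for a continuous modality.
# Assumption: relaxation = softmax-weighted sum of category word embeddings
# (not necessarily the exact scheme used in VX2TEXT).
import torch
import torch.nn as nn

class SoftTokenizer(nn.Module):
    """Maps continuous features (e.g., video or audio) to language-space embeddings.

    A modality-specific classifier scores a fixed vocabulary of category words;
    instead of a hard, non-differentiable argmax lookup, the output embedding is
    a probability-weighted sum of the category word embeddings, which keeps the
    whole pipeline trainable end to end.
    """

    def __init__(self, feat_dim: int, num_categories: int, embed_dim: int):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_categories)          # modality classifier head
        self.category_embed = nn.Embedding(num_categories, embed_dim)  # word embeddings of category names

    def forward(self, feats: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
        # feats: (batch, feat_dim) continuous features from a video/audio backbone
        logits = self.classifier(feats)                                 # (batch, num_categories)
        probs = torch.softmax(logits / temperature, dim=-1)             # relaxed, differentiable "selection"
        return probs @ self.category_embed.weight                       # (batch, embed_dim) soft token embedding

if __name__ == "__main__":
    tokenizer = SoftTokenizer(feat_dim=2048, num_categories=400, embed_dim=768)
    video_feats = torch.randn(4, 2048)                                  # dummy clip-level features
    soft_tokens = tokenizer(video_feats)
    print(soft_tokens.shape)                                            # torch.Size([4, 768])

Because the soft token is a smooth function of the classifier logits, gradients from the downstream language encoder-decoder can flow back into the modality-specific classifier, which is what makes end-to-end training possible in this setup.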
Pages: 7001-7011
Number of pages: 11