An Empirical Study of Training End-to-End Vision-and-Language Transformers

Cited by: 168
Authors
Dou, Zi-Yi [1 ]
Xu, Yichong [2 ]
Gan, Zhe [2 ]
Wang, Jianfeng [2 ]
Wang, Shuohang [2 ]
Wang, Lijuan [2 ]
Zhu, Chenguang [2 ]
Zhang, Pengchuan [2 ]
Yuan, Lu [2 ]
Peng, Nanyun [1 ]
Liu, Zicheng [2 ]
Zeng, Michael [2 ]
Affiliations
[1] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
[2] Microsoft Corp, Redmond, WA 98052 USA
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
DOI
10.1109/CVPR52688.2022.01763
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion module (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer. METER achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Notably, when further scaled up, our best VQA model achieves an accuracy of 80.54%. Code and pre-trained models are released at https://github.com/zdou0830/METER.
Pages: 18145-18155
Page count: 11
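
The abstract contrasts two multimodal fusion styles, merged attention and co-attention. The following PyTorch sketch is purely illustrative of that distinction; the class names, single-layer depth, hidden size of 768, and 12 attention heads are assumptions made here and do not reproduce the released METER implementation.

# Minimal sketch (assumed layer sizes and names) contrasting merged attention
# and co-attention fusion over text-encoder and vision-encoder outputs.
import torch
import torch.nn as nn


class MergedAttentionFusion(nn.Module):
    """Concatenate text and image tokens, then apply one self-attention layer."""

    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, image_tokens):
        merged = torch.cat([text_tokens, image_tokens], dim=1)  # (B, Lt+Lv, D)
        fused, _ = self.attn(merged, merged, merged)
        return fused


class CoAttentionFusion(nn.Module):
    """Keep the two streams separate; each one cross-attends to the other."""

    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.text_to_image = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_tokens, image_tokens):
        text_out, _ = self.text_to_image(text_tokens, image_tokens, image_tokens)
        image_out, _ = self.image_to_text(image_tokens, text_tokens, text_tokens)
        return text_out, image_out


if __name__ == "__main__":
    # Toy tensors standing in for text-encoder and vision-encoder outputs.
    text = torch.randn(2, 32, 768)    # (batch, text tokens, hidden dim)
    image = torch.randn(2, 196, 768)  # (batch, image patches, hidden dim)
    print(MergedAttentionFusion()(text, image).shape)  # torch.Size([2, 228, 768])
    t, v = CoAttentionFusion()(text, image)
    print(t.shape, v.shape)  # torch.Size([2, 32, 768]) torch.Size([2, 196, 768])

The design trade-off the paper studies is visible in the shapes: merged attention produces one joint sequence with shared parameters, while co-attention keeps per-modality sequences at the cost of separate cross-attention weights.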