Universal Multimodal Representation for Language Understanding

Cited by: 11
Authors
Zhang, Zhuosheng [1 ,2 ]
Chen, Kehai [3 ]
Wang, Rui [1 ,2 ]
Utiyama, Masao [4 ]
Sumita, Eiichiro [4 ]
Li, Zuchao [1 ,2 ]
Zhao, Hai [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai 200240, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[4] NICT, Koganei, Tokyo 1848795, Japan
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Visualization; Machine translation; Feature extraction; Annotations; Transformers; Standards; Artificial intelligence; natural language understanding; vision-language modeling; multimodal machine translation
DOI
10.1109/TPAMI.2023.3234170
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Representation learning is the foundation of natural language processing (NLP). This work presents new methods to employ visual information as auxiliary signals for general NLP tasks. For each sentence, we first retrieve a flexible number of images, either from a lightweight topic-image lookup table extracted from existing sentence-image pairs or from a shared cross-modal embedding space pre-trained on off-the-shelf text-image pairs. The text and images are then encoded by a Transformer encoder and a convolutional neural network, respectively, and the two sequences of representations are fused by an attention layer so the two modalities can interact. The retrieval process is controllable and flexible, and the universal visual representation overcomes the lack of large-scale bilingual sentence-image pairs, so our method can be applied to text-only tasks without manually annotated multimodal parallel corpora. We apply the proposed method to a wide range of natural language generation and understanding tasks, including neural machine translation, natural language inference, and semantic similarity. Experimental results show that our method is generally effective across tasks and languages. Analysis indicates that the visual signals enrich the textual representations of content words, provide fine-grained grounding information about the relationships between concepts and events, and potentially aid disambiguation.
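The fusion step summarized in the abstract, in which text token representations attend over the features of the retrieved images, can be sketched as single-head cross-attention with a residual connection. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the projection matrices, and the residual fusion are choices made for the sketch, and in the paper the image features would come from a CNN and the text features from a Transformer encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_text_with_images(text_repr, image_repr, W_q, W_k, W_v):
    """Cross-attention fusion sketch (hypothetical helper, not the paper's API).

    text_repr:  (T, d) token representations from a text encoder
    image_repr: (M, d) pooled features of the M retrieved images
    W_q, W_k, W_v: (d, d) learned projection matrices
    """
    Q = text_repr @ W_q            # queries from the text side
    K = image_repr @ W_k           # keys from the image side
    V = image_repr @ W_v           # values from the image side
    d_k = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (T, M) attention weights
    visual_context = attn @ V      # (T, d) visual signal per token
    return text_repr + visual_context  # residual fusion of the two modalities

# Toy usage with random features.
rng = np.random.default_rng(0)
d = 8
text = rng.standard_normal((5, d))   # 5 tokens
imgs = rng.standard_normal((3, d))   # 3 retrieved images
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
fused = fuse_text_with_images(text, imgs, W_q, W_k, W_v)
```

Because the number of retrieved images M only affects the attention dimension, the same fusion works for any M, which matches the abstract's claim that the retrieval step returns a flexible number of images.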
Pages: 9169-9185 (17 pages)