Universal Multimodal Representation for Language Understanding

Cited by: 11
Authors
Zhang, Zhuosheng [1 ,2 ]
Chen, Kehai [3 ]
Wang, Rui [1 ,2 ]
Utiyama, Masao [4 ]
Sumita, Eiichiro [4 ]
Li, Zuchao [1 ,2 ]
Zhao, Hai [1 ,2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Key Lab of Shanghai Educ Commiss for Intelligent Interact & Cognit Engn, Shanghai 200240, Peoples R China
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[4] NICT, Koganei, Tokyo 1848795, Japan
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Visualization; Machine translation; Feature extraction; Annotations; Transformers; Standards; Artificial intelligence; natural language understanding; vision-language modeling; multimodal machine translation;
DOI
10.1109/TPAMI.2023.3234170
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Representation learning is the foundation of natural language processing (NLP). This work presents new methods that employ visual information as auxiliary signals for general NLP tasks. For each sentence, we first retrieve a flexible number of images, either from a lightweight topic-image lookup table built over existing sentence-image pairs or from a shared cross-modal embedding space pre-trained on off-the-shelf text-image pairs. The text and the images are then encoded by a Transformer encoder and a convolutional neural network, respectively, and the two sequences of representations are fused by an attention layer that lets the two modalities interact. The retrieval process is controllable and flexible, and the universal visual representation overcomes the scarcity of large-scale bilingual sentence-image pairs, so our method can be applied to text-only tasks without manually annotated multimodal parallel corpora. We apply the proposed method to a wide range of natural language generation and understanding tasks, including neural machine translation, natural language inference, and semantic similarity. Experimental results show that our method is generally effective across tasks and languages. Analysis indicates that the visual signals enrich the textual representations of content words, provide fine-grained grounding information about the relationships between concepts and events, and potentially aid disambiguation.
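The fusion step the abstract describes (Transformer-encoded text attending over CNN image features) can be sketched roughly as below. This is a minimal illustrative reconstruction in PyTorch, not the authors' released code: the module name TextImageFusion, the 2048-dimensional image features (typical of a ResNet backbone), and the single cross-attention layer with a residual connection are all assumptions for exposition.

```python
# Minimal sketch of attention-based text-image fusion, assuming PyTorch.
# All names and dimensions are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class TextImageFusion(nn.Module):
    """Fuse a Transformer-encoded text sequence with CNN image features
    via one cross-attention layer (text queries attend to image keys/values)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, d_img: int = 2048):
        super().__init__()
        # Project CNN features (e.g., a 2048-d ResNet vector per retrieved
        # image; an assumed backbone) into the text representation space.
        self.img_proj = nn.Linear(d_img, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_states: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, seq_len, d_model) from a Transformer encoder
        # img_feats:   (batch, n_images, d_img) from a CNN over retrieved images
        img_states = self.img_proj(img_feats)
        attended, _ = self.cross_attn(query=text_states, key=img_states, value=img_states)
        # Residual connection keeps the textual signal dominant.
        return self.norm(text_states + attended)

# Example: 4 sentences of length 20, each paired with 5 retrieved images.
fusion = TextImageFusion()
text = torch.randn(4, 20, 512)    # stand-in for Transformer encoder output
images = torch.randn(4, 5, 2048)  # stand-in for CNN image features
fused = fusion(text, images)      # -> (4, 20, 512), ready for a task head
```

Because the retrieved images enter only as keys and values of the attention layer, the number of images per sentence can vary freely, which matches the "flexible number of images" retrieval the abstract describes.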
Pages: 9169-9185
Number of pages: 17