Word Representation Learning in Multimodal Pre-Trained Transformers: An Intrinsic Evaluation

Cited by: 4
Authors
Pezzelle, Sandro [1 ]
Takmaz, Ece [1 ]
Fernandez, Raquel [1 ]
Affiliations
[1] Univ Amsterdam, Inst Log Language & Computat, Amsterdam, Netherlands
Funding
European Research Council;
Keywords
DISTRIBUTIONAL SEMANTICS; MODELS;
DOI
10.1162/tacl_a_00443
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This study carries out a systematic intrinsic evaluation of the semantic representations learned by state-of-the-art pre-trained multimodal Transformers. These representations are claimed to be task-agnostic and shown to help on many downstream language-and-vision tasks. However, the extent to which they align with human semantic intuitions remains unclear. We experiment with various models and obtain static word representations from the contextualized ones they learn. We then evaluate them against the semantic judgments provided by human speakers. In line with previous evidence, we observe a generalized advantage of multimodal representations over language-only ones on concrete word pairs, but not on abstract ones. On the one hand, this confirms the effectiveness of these models at aligning language and vision, which results in better semantic representations for concepts that are grounded in images. On the other hand, models are shown to follow different representation learning patterns, which sheds some light on how and when they perform multimodal integration.
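The evaluation pipeline the abstract describes, averaging a word's contextualized vectors into a static embedding and then correlating model similarities with human judgments, can be sketched as below. This is a minimal illustration with toy random vectors and hypothetical human scores, not the paper's actual models or datasets; all names and numbers here are assumptions.

```python
import numpy as np

def static_from_contextual(occurrence_vecs):
    """Average a word's contextualized vectors (one per corpus
    occurrence) into a single static embedding."""
    return np.mean(occurrence_vecs, axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    """Spearman rank correlation (no tie handling; fine for toy data)."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy "contextualized" embeddings: 5 noisy occurrences per word, with
# dog/cat sharing a dominant direction and idea pointing elsewhere.
rng = np.random.default_rng(0)
direction = {"dog": 0, "cat": 0, "idea": 1}
contextual = {
    w: [rng.normal(size=8) + 3.0 * np.eye(8)[d] for _ in range(5)]
    for w, d in direction.items()
}
static = {w: static_from_contextual(vs) for w, vs in contextual.items()}

# Hypothetical human similarity judgments for word pairs (0-10 scale).
pairs = [("dog", "cat", 8.5), ("dog", "idea", 1.2), ("cat", "idea", 1.0)]
model_sims = [cosine(static[a], static[b]) for a, b, _ in pairs]
human_sims = [h for _, _, h in pairs]

rho = spearman(model_sims, human_sims)
print(f"Spearman rho vs. human judgments: {rho:.2f}")
```

In the real setting, the contextualized vectors would come from a pre-trained (multimodal) Transformer run over sentences containing each word, and the human scores from a similarity benchmark; the comparison step, Spearman correlation between model similarities and human ratings, is the standard intrinsic-evaluation recipe.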
Pages: 1563-1579
Number of pages: 17