On cross-lingual retrieval with multilingual text encoders

Cited by: 0
Authors
Robert Litschko
Ivan Vulić
Simone Paolo Ponzetto
Goran Glavaš
Affiliations
[1] University of Mannheim
[2] Language Technology Lab
[3] University of Cambridge
Source
Information Retrieval Journal | 2022, Vol. 25
Keywords
Cross-lingual IR; Multilingual text encoders; Learning to Rank
DOI
Not available
Abstract
Pretrained multilingual text encoders based on neural transformer architectures, such as multilingual BERT (mBERT) and XLM, have recently become a default paradigm for cross-lingual transfer of natural language processing models, rendering cross-lingual word embedding spaces (CLWEs) effectively obsolete. In this work we present a systematic empirical study focused on the suitability of the state-of-the-art multilingual encoders for cross-lingual document and sentence retrieval tasks across a number of diverse language pairs. We first treat these models as multilingual text encoders and benchmark their performance in unsupervised ad-hoc sentence- and document-level CLIR. In contrast to supervised language understanding, our results indicate that for unsupervised document-level CLIR—a setup with no relevance judgments for IR-specific fine-tuning—pretrained multilingual encoders on average fail to significantly outperform earlier models based on CLWEs. For sentence-level retrieval, we do obtain state-of-the-art performance: the peak scores, however, are achieved by multilingual encoders that have been further specialized, in a supervised fashion, for sentence understanding tasks, rather than by their vanilla ‘off-the-shelf’ variants. Following these results, we introduce localized relevance matching for document-level CLIR, where we independently score a query against document sections. In the second part, we evaluate multilingual encoders fine-tuned in a supervised fashion (i.e., learning to rank) on English relevance data in a series of zero-shot language and domain transfer CLIR experiments. Our results show that, despite the supervision, and due to the domain and language shift, supervised re-ranking rarely improves over the performance of multilingual transformers used as unsupervised base rankers. Finally, only with in-domain contrastive fine-tuning (i.e., same domain, only language transfer) do we manage to improve ranking quality.
We uncover substantial empirical differences between cross-lingual retrieval results and results of (zero-shot) cross-lingual transfer for monolingual retrieval in target languages, which point to “monolingual overfitting” of retrieval models trained on monolingual (English) data, even if they are based on multilingual transformers.
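The localized relevance matching described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `embed` is a toy deterministic stand-in (a hashed bag-of-words vector) for a real multilingual encoder such as mean-pooled mBERT, and the max-over-sections aggregation is an assumption based on the abstract's description of scoring a query independently against document sections.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    # Toy stand-in for a multilingual sentence/document encoder
    # (e.g., mean-pooled mBERT): a hashed bag-of-words vector,
    # L2-normalized so dot products behave like cosine similarity.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode("utf-8")) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def localized_score(query, sections):
    # Localized relevance matching: score the query against each
    # document section independently and keep the best match,
    # instead of encoding the whole (long) document at once.
    q = embed(query)
    return max(float(q @ embed(s)) for s in sections)

def rank(query, docs):
    # docs: {doc_id: [section, ...]}; returns ids by descending score.
    scores = {d: localized_score(query, secs) for d, secs in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

With a real encoder, `embed` would be replaced by the model's pooled output; scoring per section also keeps long documents within the encoder's input length limit.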
Pages: 149–183
Page count: 34