Pseudo-Relevance Feedback for Multiple Representation Dense Retrieval

Cited by: 33
Authors
Wang, Xiao [1 ]
Macdonald, Craig [1 ]
Tonellotto, Nicola [2 ]
Ounis, Iadh [1 ]
Affiliations
[1] Univ Glasgow, Glasgow, Lanark, Scotland
[2] Univ Pisa, Pisa, Italy
Source
PROCEEDINGS OF THE 2021 ACM SIGIR INTERNATIONAL CONFERENCE ON THEORY OF INFORMATION RETRIEVAL, ICTIR 2021 | 2021
Funding
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords
Query Expansion; Pseudo-Relevance Feedback; BERT; Dense Retrieval;
DOI
10.1145/3471158.3472250
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Pseudo-relevance feedback mechanisms, from Rocchio to the relevance models, have shown the usefulness of expanding and reweighting the users' initial queries using information occurring in an initial set of retrieved documents, known as the pseudo-relevant set. Recently, dense retrieval - through the use of neural contextual language models such as BERT for analysing the documents' and queries' contents and computing their relevance scores - has shown a promising performance on several information retrieval tasks still relying on the traditional inverted index for identifying documents relevant to a query. Two different dense retrieval families have emerged: the use of single embedded representations for each passage and query (e.g. using BERT's [CLS] token), or via multiple representations (e.g. using an embedding for each token of the query and document). In this work, we conduct the first study into the potential for multiple representation dense retrieval to be enhanced using pseudo-relevance feedback. In particular, based on the pseudo-relevant set of documents identified using a first-pass dense retrieval, we extract representative feedback embeddings (using KMeans clustering) - while ensuring that these embeddings discriminate among passages (based on IDF) - which are then added to the query representation. These additional feedback embeddings are shown to both enhance the effectiveness of a reranking as well as an additional dense retrieval operation. Indeed, experiments on the MSMARCO passage ranking dataset show that MAP can be improved by up to 26% on the TREC 2019 query set and 10% on the TREC 2020 query set by the application of our proposed ColBERT-PRF method on a ColBERT dense retrieval approach.
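The abstract describes the ColBERT-PRF expansion step only at a high level (cluster the token embeddings of the pseudo-relevant passages with KMeans, keep the most discriminative centroids based on IDF, and append them to the query's token embeddings). The sketch below illustrates how such a step could be implemented; it is not the authors' code, and the function name, the parameters (n_clusters, n_expansion, beta) and the nearest-token IDF heuristic for scoring centroids are illustrative assumptions that may differ from the paper's exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans

def colbert_prf_expand(query_embs, feedback_token_embs, feedback_token_idf,
                       n_clusters=24, n_expansion=10, beta=1.0):
    """Illustrative sketch of a KMeans/IDF-based PRF expansion for
    multiple-representation (ColBERT-style) dense retrieval.

    query_embs          : (num_query_tokens, dim) query token embeddings
    feedback_token_embs : (num_feedback_tokens, dim) token embeddings drawn
                          from the top-ranked (pseudo-relevant) passages
    feedback_token_idf  : (num_feedback_tokens,) IDF of each feedback token
    """
    # 1. Summarise the pseudo-relevant set with KMeans centroids.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(feedback_token_embs)
    centroids = km.cluster_centers_                        # (n_clusters, dim)

    # 2. Score each centroid by the IDF of its nearest feedback token, so
    #    that the expansion embeddings discriminate among passages
    #    (assumed heuristic; the paper's exact scoring may differ).
    dists = np.linalg.norm(
        feedback_token_embs[None, :, :] - centroids[:, None, :], axis=-1)
    nearest_tok = dists.argmin(axis=1)                      # (n_clusters,)
    centroid_idf = feedback_token_idf[nearest_tok]

    # 3. Keep the most discriminative centroids, weighted by beta.
    top = np.argsort(-centroid_idf)[:n_expansion]
    expansion_embs = beta * centroids[top]

    # 4. The expanded query (original plus feedback embeddings) is then used
    #    for reranking and/or a second dense retrieval pass with the usual
    #    MaxSim scoring.
    return np.vstack([query_embs, expansion_embs])

# Toy usage with random data (dim=128, 3 query tokens, 200 feedback tokens).
rng = np.random.default_rng(0)
expanded = colbert_prf_expand(rng.normal(size=(3, 128)),
                              rng.normal(size=(200, 128)),
                              rng.uniform(1.0, 8.0, size=200))
print(expanded.shape)  # (13, 128): 3 query tokens + 10 expansion embeddings
```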
Pages: 297-306
Page count: 10