Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings

Cited by: 8
Authors
Xiao, Shitao [1 ,5 ]
Liu, Zheng [2 ]
Han, Weihao [3 ]
Zhang, Jianjin [3 ]
Lian, Defu [4 ]
Gong, Yeyun [2 ]
Chen, Qi [2 ]
Yang, Fan [2 ]
Sun, Hao [3 ]
Shao, Yingxia [1 ]
Xie, Xing [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
[3] Microsoft Search Technol Ctr, Beijing, Peoples R China
[4] Univ Sci & Technol China, Hefei, Peoples R China
[5] Microsoft, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22) | 2022
Funding
National Natural Science Foundation of China
Keywords
Vector Quantization; Knowledge Distillation; Embedding Based Retrieval; Approximate Nearest Neighbour Search;
DOI
10.1145/3477495.3531799
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Vector quantization (VQ) based ANN indexes, such as Inverted File System (IVF) and Product Quantization (PQ), have been widely applied to embedding-based document retrieval thanks to their competitive time and memory efficiency. Traditionally, VQ is learned to minimize the reconstruction loss, i.e., the distortion between the original dense embeddings and the reconstructed embeddings after quantization. Unfortunately, this objective is inconsistent with the goal of selecting ground-truth documents for the input query, which may cause a severe loss of retrieval quality. Recent works identify this defect and propose to minimize the retrieval loss through contrastive learning. However, these methods rely heavily on queries with ground-truth documents, so their performance is limited by the scarcity of labeled data. In this paper, we propose Distill-VQ, which unifies the learning of IVF and PQ within a knowledge distillation framework. In Distill-VQ, the dense embeddings are leveraged as "teachers", which predict the query's relevance to the sampled documents. The VQ modules are treated as the "students", which are trained to reproduce the predicted relevance, such that the reconstructed embeddings fully preserve the retrieval results of the dense embeddings. By doing so, Distill-VQ is able to derive substantial training signals from massive unlabeled data, which significantly improves retrieval quality. We perform comprehensive explorations of how knowledge distillation is best conducted, which may provide useful insights for learning VQ-based ANN indexes. We also show experimentally that labeled data is no longer a necessity for high-quality vector quantization, which indicates Distill-VQ's strong applicability in practice. The evaluations are performed on the MS MARCO and Natural Questions benchmarks, where Distill-VQ notably outperforms the SOTA VQ methods in Recall and MRR. Our code is available at https://github.com/staoxiao/LibVQ.
Pages: 1513-1523 (11 pages)
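
As a rough illustration of the mechanism described in the abstract, the sketch below shows what a distillation objective of this kind might look like in PyTorch: the fixed dense query and document embeddings act as the teacher, a simplified product-quantization module acts as the student, and a KL-divergence loss pushes the student's query-document relevance distribution toward the teacher's. This is a minimal sketch under stated assumptions, not the authors' LibVQ implementation; the names (ProductQuantizer, distill_loss, temperature) are hypothetical, and inner-product relevance with a listwise softmax is an assumption.

# Minimal, illustrative sketch of a retrieval-oriented distillation loss for PQ.
# Not the authors' LibVQ code; all names and hyperparameters are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ProductQuantizer(nn.Module):
    """Simplified product quantization: each sub-vector is replaced by its
    nearest codeword, and gradients reach the selected codebook entries."""

    def __init__(self, dim: int, num_subvectors: int = 8, codebook_size: int = 256):
        super().__init__()
        assert dim % num_subvectors == 0
        self.m = num_subvectors
        self.sub_dim = dim // num_subvectors
        # One codebook per sub-space: (M, K, dim/M)
        self.codebooks = nn.Parameter(
            torch.randn(self.m, codebook_size, self.sub_dim) * 0.1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split each embedding into M sub-vectors: (batch, M, dim/M)
        x_sub = x.view(x.size(0), self.m, self.sub_dim)
        # Squared distance to every codeword: (batch, M, K)
        dists = ((x_sub.unsqueeze(2) - self.codebooks.unsqueeze(0)) ** 2).sum(-1)
        codes = dists.argmin(-1)  # hard assignment, (batch, M)
        # Reconstruct from the selected codewords; the argmin itself is
        # discrete, but gradients flow into the chosen codebook entries.
        quantized = torch.gather(
            self.codebooks.expand(x.size(0), -1, -1, -1),
            2,
            codes[..., None, None].expand(-1, -1, 1, self.sub_dim),
        ).squeeze(2)
        return quantized.reshape(x.size(0), -1)


def distill_loss(q_emb, d_emb, quantizer, temperature: float = 1.0):
    """KL divergence between the teacher's and the student's soft relevance
    distributions over the sampled documents (no relevance labels needed)."""
    teacher_scores = q_emb @ d_emb.t()             # dense query vs. dense docs
    student_scores = q_emb @ quantizer(d_emb).t()  # dense query vs. quantized docs
    teacher_probs = F.softmax(teacher_scores / temperature, dim=-1)
    student_logp = F.log_softmax(student_scores / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean")


if __name__ == "__main__":
    torch.manual_seed(0)
    pq = ProductQuantizer(dim=128)
    optimizer = torch.optim.Adam(pq.parameters(), lr=1e-3)
    # Random tensors stand in for pre-computed dense query/document embeddings.
    queries, docs = torch.randn(32, 128), torch.randn(64, 128)
    loss = distill_loss(queries, docs, pq)
    loss.backward()
    optimizer.step()
    print(f"distillation loss: {loss.item():.4f}")

Because the teacher's scores come from the dense embeddings alone, such an objective can in principle be computed on unlabeled queries and documents, which is the property the abstract highlights; the IVF component and the paper's specific sampling and scoring choices are not modeled here.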