Residual Vector Product Quantization for approximate nearest neighbor search

Cited by: 7
Authors
Niu, Lushuai [1]
Xu, Zhi [1]
Zhao, Longyang [1]
He, Daojing [2]
Ji, Jianqiu [1,3]
Yuan, Xiaoli [4]
Xue, Mian [4]
Affiliations
[1] Guilin Univ Elect Technol, Sch Comp Informat & Secur, Guilin 541004, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[3] Doodod Technol Co Ltd, Beijing, Peoples R China
[4] Hohai Univ, Nanjing 210024, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Vector quantization; Residual structure; Index structure; Approximate nearest neighbor search;
DOI
10.1016/j.eswa.2023.120832
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Vector quantization is one of the most popular techniques for approximate nearest neighbor (ANN) search. Over the past decade, many vector quantization methods have been proposed for ANN search. However, these methods do not strike a satisfactory balance between accuracy and efficiency because of defects in their quantization structures. To overcome this problem, a quantization method named Residual Vector Product Quantization (RVPQ) is proposed in this study. In this method, the data space is decomposed into several subspaces, and a residual structure consisting of several ordered residual codebooks is constructed for each subspace. Learned with an effective joint training algorithm, the quantization structure of RVPQ is much better than that of other methods and greatly enhances the performance of ANN search. In addition, an efficient residual quantization encoding method, H-Variable Beam Search, is proposed to achieve higher encoding efficiency with negligible loss of accuracy. Furthermore, an Inverted Multi-Index based on RVPQ is designed to effectively support ANN search over very large-scale databases. Experimental results and theoretical evaluations show that RVPQ outperforms state-of-the-art methods in retrieval accuracy while retaining comparable computational complexity.
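
For readers who want a concrete picture of the quantization structure described in the abstract, the following Python sketch illustrates a product-of-residual-quantizers layout in the spirit of RVPQ: the vector space is split into M subspaces, and each subspace is quantized by a chain of L ordered residual codebooks. This is only an illustrative sketch under simplifying assumptions, not the authors' implementation; the codebooks are trained greedily with per-level k-means and encoding is greedy nearest-centroid, standing in for the paper's joint training algorithm and H-Variable Beam Search. Function names such as train_rvpq and encode_rvpq are hypothetical.

import numpy as np
from sklearn.cluster import KMeans

def train_rvpq(X, M=4, L=2, K=64, seed=0):
    # Greedily learn L residual codebooks of size K in each of M subspaces
    # (NOT the paper's joint training; per-level k-means for illustration only).
    d = X.shape[1] // M
    codebooks = []                                   # codebooks[m][l]: (K, d)
    for m in range(M):
        residual = X[:, m * d:(m + 1) * d].astype(np.float64)
        levels = []
        for _ in range(L):
            km = KMeans(n_clusters=K, n_init=4, random_state=seed).fit(residual)
            levels.append(km.cluster_centers_)
            residual -= km.cluster_centers_[km.labels_]   # pass residual to next level
        codebooks.append(levels)
    return codebooks

def encode_rvpq(x, codebooks):
    # Greedy encoding: nearest centroid at every residual level
    # (the paper's H-Variable Beam Search would explore several candidates instead).
    M, L = len(codebooks), len(codebooks[0])
    d = len(x) // M
    codes = np.empty((M, L), dtype=np.int32)
    for m in range(M):
        residual = x[m * d:(m + 1) * d].astype(np.float64)
        for l in range(L):
            C = codebooks[m][l]
            idx = int(np.argmin(((C - residual) ** 2).sum(axis=1)))
            codes[m, l] = idx
            residual -= C[idx]
    return codes

def decode_rvpq(codes, codebooks):
    # Reconstruct by summing the selected centroids level by level in each subspace.
    return np.concatenate([
        sum(codebooks[m][l][codes[m, l]] for l in range(len(codebooks[m])))
        for m in range(len(codebooks))
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 32))              # synthetic data for illustration
    books = train_rvpq(X, M=4, L=2, K=64)
    codes = encode_rvpq(X[0], books)
    err = np.linalg.norm(X[0] - decode_rvpq(codes, books))
    print("codes per subspace:", codes, "reconstruction error:", err)

With M subspaces and L codebooks of K centroids each, a vector is stored as M*L small integers; the residual chain within each subspace is what distinguishes this layout from plain product quantization.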
Pages: 11
Related Papers
50 records in total
  • [31] Optimized high order product quantization for approximate nearest neighbors search
    Li, Linhao
    Hu, Qinghua
    FRONTIERS OF COMPUTER SCIENCE, 2020, 14 (02) : 259 - 272
  • [32] Accelerated Approximate Nearest Neighbors Search Through Hierarchical Product Quantization
    Abdelhadi, Ameer M. S.
    Bouganis, Christos-Savvas
    Constantinides, George A.
    2019 INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE TECHNOLOGY (ICFPT 2019), 2019, : 90 - 98
  • [33] Asymmetric Mapping Quantization for Nearest Neighbor Search
    Hong, Weixiang
    Tang, Xueyan
    Meng, Jingjing
    Yuan, Junsong
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (07) : 1783 - 1790
  • [34] A New Cell-Level Search Based Non-Exhaustive Approximate Nearest Neighbor (ANN) Search Algorithm in the Framework of Product Quantization
    Wang, Yang
    Pan, Zhibin
    Li, Rui
    IEEE ACCESS, 2019, 7 : 37059 - 37070
  • [35] Adaptive bit allocation hashing for approximate nearest neighbor search
    Guo, Qin-Zhen
    Zeng, Zhi
    Zhang, Shuwu
    NEUROCOMPUTING, 2015, 151 : 719 - 728
  • [36] Vector quantization by lazy pairwise nearest neighbor method
    Kaukoranta, T
    Fränti, P
    Nevalainen, O
    OPTICAL ENGINEERING, 1999, 38 (11) : 1862 - 1868
  • [37] A vector quantization method for nearest neighbor classifier design
    Yen, CW
    Young, CN
    Nagurka, ML
    PATTERN RECOGNITION LETTERS, 2004, 25 (06) : 725 - 731
  • [39] A Revisit of Hashing Algorithms for Approximate Nearest Neighbor Search
    Cai, Deng
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2021, 33 (06) : 2337 - 2348
  • [40] Fast spectral analysis for approximate nearest neighbor search
    Wang, Jing
    Shen, Jie
    MACHINE LEARNING, 2022, 111 (06) : 2297 - 2322