Sparse Composite Quantization

Cited by: 0
Authors
Zhang, Ting [1 ]
Qi, Guo-Jun [2 ]
Tang, Jinhui [3 ]
Wang, Jingdong [4 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] Univ Cent Florida, Orlando, FL 32816 USA
[3] Nanjing Univ Sci & Technol, Nanjing, Jiangsu, Peoples R China
[4] Microsoft Res, Beijing, Peoples R China
Keywords
TREES
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Quantization techniques have shown competitive performance in approximate nearest neighbor (ANN) search. The state-of-the-art algorithm, composite quantization, exploits compositionality, i.e., vector approximation accuracy, in contrast to product quantization and Cartesian k-means. However, we have observed that the runtime cost of computing the distance table in composite quantization, which serves as a lookup table for fast distance computation, becomes non-negligible in real applications, e.g., when reordering candidates retrieved from an inverted index over very large databases. To address this problem, we develop a novel approach, called sparse composite quantization, which constructs sparse dictionaries. The benefit is that the distance evaluation between the query and a dictionary element (a sparse vector) is accelerated by efficient sparse vector operations, which substantially reduces the cost of distance table computation. Experimental results on large-scale ANN retrieval tasks (1M SIFTs and 1B SIFTs) and applications to object retrieval show that the proposed approach yields competitive performance: search accuracy superior to product quantization and Cartesian k-means at almost the same computational cost, and much faster ANN search than composite quantization at the same level of accuracy.
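A minimal sketch of the core idea from the abstract (in Python with NumPy; the function names and data layout are illustrative assumptions, not the authors' implementation): the per-query distance lookup table stores inner products between the query and every dictionary element, and when each element keeps only a few nonzero coordinates, the table can be filled with sparse dot products costing O(nnz) per entry instead of O(d).

```python
import numpy as np

def distance_table_dense(query, dictionaries):
    """Inner products q . c_mk for all M*K dictionary elements.
    dictionaries has shape (M, K, d); cost is O(M*K*d)."""
    return dictionaries @ query

def distance_table_sparse(query, sparse_dicts):
    """Same table, but each element is stored as an (indices, values)
    pair; each dot product costs O(nnz) instead of O(d)."""
    M, K = len(sparse_dicts), len(sparse_dicts[0])
    table = np.zeros((M, K))
    for m in range(M):
        for k in range(K):
            idx, vals = sparse_dicts[m][k]
            table[m, k] = query[idx] @ vals
    return table

# Toy setup: M sub-dictionaries of K elements in d dimensions,
# each element having only nnz nonzero coordinates.
rng = np.random.default_rng(0)
d, M, K, nnz = 128, 4, 16, 8
query = rng.standard_normal(d)
dense = np.zeros((M, K, d))
sparse = [[None] * K for _ in range(M)]
for m in range(M):
    for k in range(K):
        idx = rng.choice(d, size=nnz, replace=False)
        vals = rng.standard_normal(nnz)
        dense[m, k, idx] = vals
        sparse[m][k] = (idx, vals)

# Both routes produce the same lookup table; the sparse route
# touches only M*K*nnz coordinates of the query.
table_dense = distance_table_dense(query, dense)
table_sparse = distance_table_sparse(query, sparse)
```

Once the table is built, approximating the distance from the query to any database vector reduces to summing M table entries selected by that vector's codes, which is why shrinking the table-construction cost matters for large-scale reranking.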
Pages: 4548-4556
Page count: 9
Related Papers (showing 10 of 50)
  • [1] Quantization of sparse representations
    Boufounos, Petros
    Baraniuk, Richard
    DCC 2007: DATA COMPRESSION CONFERENCE, PROCEEDINGS, 2007, : 378 - 378
  • [2] Focused Quantization for Sparse CNNs
    Zhao, Yiren
    Gao, Xitong
    Bates, Daniel
    Mullins, Robert
    Xu, Cheng-Zhong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [3] Sparse DP Quantization Algorithm
    Bandoh, Yukihiro
    Takamura, Seishi
    Shimizu, Atsushi
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2019, E102A (03) : 553 - 565
  • [4] Sparse Quantization for Patch Description
    Boix, Xavier
    Gygli, Michael
    Roig, Gemma
    Van Gool, Luc
    2013 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013, : 2842 - 2849
  • [5] Composite Quantization
    Wang, Jingdong
    Zhang, Ting
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2019, 41 (06) : 1308 - 1322
  • [6] Quantization of Eigen Subspace for Sparse Representation
    Yilmaz, Onur
    Akansu, Ali N.
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (14) : 3576 - 3585
  • [7] On the Uniform Quantization of a Class of Sparse Sources
    Fraysse, Aurelia
    Pesquet-Popescu, Beatrice
    Pesquet, Jean-Christophe
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2009, 55 (07) : 3243 - 3263
  • [8] Serial Quantization for Sparse Time Sequences
    Cohen, Alejandro
    Shlezinger, Nir
    Salamatian, Salman
    Eldar, Yonina C.
    Medard, Muriel
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 3299 - 3314
  • [9] EMPIRICAL QUANTIZATION FOR SPARSE SAMPLING SYSTEMS
    Lexa, Michael A.
    2010 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2010, : 3942 - 3945
  • [10] Distributed Quantization for Sparse Time Sequences
    Cohen, Alejandro
    Shlezinger, Nir
    Salamatian, Salman
    Eldar, Yonina C.
    Medard, Muriel
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 5580 - 5584