Strongly Constrained Discrete Hashing

Cited by: 44
Authors
Chen, Yong [1]
Tian, Zhibao [2]
Zhang, Hui [2]
Wang, Jun [3]
Zhang, Dell [4,5]
Affiliations
[1] Peking Univ, Sch EECS, Key Lab Machine Percept, Beijing 100871, Peoples R China
[2] Beihang Univ, Dept Comp Sci & Engn, Beijing 100191, Peoples R China
[3] UCL, Dept Comp Sci, London WC1E 6BT, England
[4] Birkbeck Univ London, Dept Comp Sci & Informat Syst, London WC1E 7HX, England
[5] Blue Prism AI Labs, London WC2B 6NH, England
Funding
China Postdoctoral Science Foundation;
Keywords
Learning to hash; image retrieval; discrete optimization; ITERATIVE QUANTIZATION; PROCRUSTEAN APPROACH;
DOI
10.1109/TIP.2020.2963952
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
Learning to hash is a fundamental technique widely used in large-scale image retrieval. Most existing learning-to-hash methods address the underlying discrete optimization problem through a continuous relaxation of the binary constraint, which usually leads to large quantization errors and consequently suboptimal binary codes. A few discrete hashing methods have emerged recently. However, they either completely ignore some useful constraints (specifically the balance and decorrelation of hash bits) or merely turn those constraints into regularizers, which makes the optimization easier but less accurate. In this paper, we propose a novel supervised hashing method named Strongly Constrained Discrete Hashing (SCDH) that overcomes these limitations. It learns the binary codes for all examples in the training set and, at the same time, obtains a hash function for unseen samples while preserving the above-mentioned constraints. Although the SCDH model is fairly sophisticated, we are able to find closed-form solutions to all of its optimization subproblems and thus design an efficient algorithm that converges quickly. In addition, we extend SCDH to a kernelized version, SCDH$_K$. Our experiments on three large benchmark datasets demonstrate that SCDH and SCDH$_K$ not only achieve substantially higher MAP scores than state-of-the-art baselines but also train much faster than the other supervised methods.
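For intuition only, the short Python sketch below illustrates the two "strong" constraints referred to in the abstract, bit balance and bit decorrelation, together with the quantization error introduced by a naive sign-based relaxation. It is not an implementation of SCDH: the synthetic data, the random orthogonal projection W, and all variable names are assumptions made purely for illustration.

```python
import numpy as np

# Minimal illustrative sketch (not the SCDH algorithm): it measures the two
# hash-code constraints discussed in the abstract, bit balance and bit
# decorrelation, plus the quantization error of a sign-based relaxation.
# All sizes and names (n, d, k, W) are assumptions for illustration.

rng = np.random.default_rng(0)
n, d, k = 1000, 64, 16          # n samples, d features, k hash bits

X = rng.standard_normal((n, d))
X -= X.mean(axis=0)             # center the data

# A toy linear hash function: project with a random orthogonal matrix,
# then binarize. SCDH instead learns the codes discretely with the
# constraints enforced, rather than relaxing and rounding like this.
W, _ = np.linalg.qr(rng.standard_normal((d, k)))
Z = X @ W                       # continuous (relaxed) codes
B = np.sign(Z)                  # binary codes in {-1, +1}
B[B == 0] = 1

# Quantization error of the continuous relaxation: ||B - Z||_F^2
quant_err = np.linalg.norm(B - Z) ** 2

# Balance constraint: each bit should split the data evenly, i.e. B^T 1 = 0.
balance_violation = np.abs(B.sum(axis=0)).max()

# Decorrelation constraint: bits should be uncorrelated, i.e. B^T B = n * I.
decorr_violation = np.abs(B.T @ B / n - np.eye(k)).max()

print(f"quantization error : {quant_err:.1f}")
print(f"max |B^T 1|        : {balance_violation:.0f}")
print(f"max |B^T B/n - I|  : {decorr_violation:.2f}")
```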
Pages: 3596-3611
Number of pages: 16