Unsupervised Deep Hashing with Similarity-Adaptive and Discrete Optimization

Citations: 300
Authors
Shen, Fumin [1 ,2 ]
Xu, Yan [1 ,2 ]
Liu, Li [3 ]
Yang, Yang [1 ,2 ]
Huang, Zi [4 ]
Shen, Heng Tao [1 ,2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Ctr Future Media, Chengdu 611731, Sichuan, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Sichuan, Peoples R China
[3] Univ East Anglia, Sch Comp Sci, Norwich NR4 7TJ, Norfolk, England
[4] Univ Queensland, Sch Informat Technol & Elect Engn, St Lucia, Qld 4072, Australia
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Binary codes; unsupervised deep hashing; image retrieval; BINARY-CODES; REPRESENTATION; MACHINES;
DOI
10.1109/TPAMI.2018.2789887
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recent vision and learning studies show that learning compact hash codes can facilitate massive data processing with significantly reduced storage and computation. In particular, learning deep hash functions has greatly improved retrieval performance, typically under semantic supervision. In contrast, current unsupervised deep hashing algorithms can hardly achieve satisfactory performance, due to either relaxed optimization or the absence of a similarity-sensitive objective. In this work, we propose a simple yet effective unsupervised hashing framework, named Similarity-Adaptive Deep Hashing (SADH), which alternates over three training modules: deep hash model training, similarity graph updating, and binary code optimization. The key difference from the widely used two-step hashing method is that the output representations of the learned deep model help update the similarity graph matrix, which is then used to improve the subsequent code optimization. In addition, to produce high-quality binary codes, we devise an effective discrete optimization algorithm which can directly handle the binary constraints with a general hashing loss. Extensive experiments validate the efficacy of SADH, which consistently outperforms the state of the art by large margins.
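The abstract's alternating scheme can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the deep hash model is replaced by a least-squares linear projection, the similarity graph by a cosine-similarity matrix, and the discrete code step by a simple sign-based smoothing update that keeps codes binary throughout. All function names and the toy data below are illustrative assumptions; the paper's actual losses and network are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity_graph(feats):
    # Module 2 (schematic): cosine-similarity graph over the
    # model's current output representations.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    s = f @ f.T
    np.fill_diagonal(s, 0.0)
    return s

def optimize_codes(S, n_bits, n_iter=20):
    # Module 3 (schematic): a discrete update that never relaxes the
    # binary constraint -- iterate B <- sign(S @ B) so that similar
    # points are pushed toward identical code bits.
    n = S.shape[0]
    B = np.sign(rng.standard_normal((n, n_bits)))
    for _ in range(n_iter):
        B_new = np.sign(S @ B)
        B_new[B_new == 0] = 1.0   # resolve ties to +1
        if np.array_equal(B_new, B):
            break
        B = B_new
    return B

def fit_hash_model(X, B):
    # Module 1 (schematic): stand-in for deep model training --
    # least-squares W so that sign(X @ W) approximates the codes B.
    W, *_ = np.linalg.lstsq(X, B, rcond=None)
    return W

X = rng.standard_normal((40, 8))       # toy feature matrix
feats = X.copy()
for _ in range(3):                     # alternate the three modules
    S = similarity_graph(feats)        # graph built from model outputs
    B = optimize_codes(S, n_bits=16)   # discrete code optimization
    W = fit_hash_model(X, B)           # hash-model training
    feats = X @ W                      # outputs feed the next graph

codes = np.sign(X @ W)                 # final binary codes
```

The key structural point, mirrored from the abstract, is the feedback loop: the trained model's outputs (`feats`) rebuild the similarity graph before each round of code optimization, rather than fixing the graph once up front as in two-step hashing.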
Pages: 3034-3044
Page count: 11