Unsupervised Semantic-Preserving Adversarial Hashing for Image Search

Cited by: 145
Authors
Deng, Cheng [1 ]
Yang, Erkun [1 ]
Liu, Tongliang [2 ]
Li, Jie [1 ]
Liu, Wei [3 ]
Tao, Dacheng [2 ]
Affiliations
[1] Xidian Univ, Sch Elect Engn, Xian 710071, Shaanxi, Peoples R China
[2] Univ Sydney, UBTECH Sydney Artificial Intelligence Ctr, Sch Comp Sci, Fac Engn & Informat Technol, Sydney, NSW 2008, Australia
[3] Tencent AI Lab, Shenzhen 518057, Peoples R China
Funding
Australian Research Council; National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Hashing; image search; adversarial learning; deep learning; NEAREST-NEIGHBOR;
DOI
10.1109/TIP.2019.2903661
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hashing plays a pivotal role in nearest-neighbor search for large-scale image retrieval. Recently, deep learning-based hashing methods have achieved promising performance. However, most of these deep methods involve discriminative models, which require large-scale, labeled training datasets, thus hindering their real-world applications. In this paper, we propose a novel strategy to exploit the semantic similarity of the training data and design an efficient generative adversarial framework to learn binary hash codes in an unsupervised manner. Specifically, our model consists of three different neural networks: an encoder network that learns hash codes from images, a generative network that generates images from hash codes, and a discriminative network that distinguishes between pairs of hash codes and images. By adversarially training these networks, we learn mutually coherent encoder and generative networks, and the encoder network can then output efficient hash codes. We also propose a novel strategy that utilizes both feature and neighbor similarities to construct a semantic similarity matrix, and then use this matrix to guide the hash-code learning process. Integrating the supervision of this semantic similarity matrix into the adversarial learning framework efficiently preserves the semantic information of the training data in Hamming space. Experimental results on three widely used benchmarks show that our method not only significantly outperforms several state-of-the-art unsupervised hashing methods, but also achieves performance comparable to popular supervised hashing methods.
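The similarity-construction step mentioned in the abstract (fusing feature similarity with neighbor similarity into a pseudo-label matrix that supervises hash-code learning) can be illustrated with a short sketch. The Python/NumPy code below is a minimal, hypothetical rendering of that idea and is not the authors' implementation: the function name, the choice of k, the similarity thresholds, and the simple averaging rule are assumptions made purely for illustration.

import numpy as np

def semantic_similarity_matrix(features, k=10, pos_thresh=0.8, neg_thresh=0.2):
    """Illustrative sketch: features is an (n, d) array of deep image features,
    e.g. extracted from a pre-trained CNN. Returns an (n, n) matrix S with
    entries in {+1, 0, -1} acting as pseudo similarity labels."""
    # 1) Feature similarity: cosine similarity between all image pairs.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    cos_sim = f @ f.T                                   # (n, n), values in [-1, 1]

    # 2) Neighbor similarity: represent each image by its k-nearest-neighbor
    #    indicator vector and compare how much two neighborhoods overlap.
    n = features.shape[0]
    knn = np.argsort(-cos_sim, axis=1)[:, 1:k + 1]      # top-k neighbors, excluding self
    indicator = np.zeros((n, n), dtype=np.float32)
    np.put_along_axis(indicator, knn, 1.0, axis=1)
    ind_norm = indicator / np.linalg.norm(indicator, axis=1, keepdims=True)
    nbr_sim = ind_norm @ ind_norm.T                     # normalized neighborhood overlap

    # 3) Fuse the two similarities (here a plain average, as an assumption) and
    #    binarize: +1 for confidently similar pairs, -1 for confidently
    #    dissimilar pairs, 0 for uncertain pairs that would be ignored.
    fused = 0.5 * (cos_sim + nbr_sim)
    S = np.zeros((n, n), dtype=np.float32)
    S[fused > pos_thresh] = 1.0
    S[fused < neg_thresh] = -1.0
    return S

Such a matrix S could then serve as the pairwise supervision signal when training the encoder network, rewarding small Hamming distances between codes of pairs labeled +1 and large distances for pairs labeled -1; the exact loss and fusion used in the paper may differ.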
Pages: 4032-4044
Number of pages: 13