Learning Hash Codes with Listwise Supervision

Cited: 102
Authors
Wang, Jun [1 ]
Liu, Wei [2 ]
Sun, Andy X. [3 ]
Jiang, Yu-Gang [4 ]
Affiliations
[1] IBM TJ Watson Res Ctr, Business Analyt & Math Sci, Yorktown Hts, NY 10598 USA
[2] IBM TJ Watson Res Ctr, Multimedia Analyt, Yorktown Hts, NY USA
[3] Georgia Inst Technol, Sch Ind & Syst Engn, Atlanta, GA 30332 USA
[4] Fudan Univ, Sch Comp Sci, Shanghai, Peoples R China
Source
2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2013
DOI
10.1109/ICCV.2013.377
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead.
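To make the abstract's core idea concrete, the following is a minimal sketch, not the authors' optimization procedure: it uses a random projection matrix `W` in place of the learned one, hashes points via the sign of linear projections, and estimates ranking quality as the fraction of sampled rank triplets whose order is preserved by Hamming distance in code space. All sizes and helper names (`hash_codes`, `triplet_quality`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 200 points in 16 dimensions, 8-bit codes.
# Sizes and data are illustrative only, not from the paper.
X = rng.normal(size=(200, 16))
W = rng.normal(size=(16, 8))  # random linear projections (a learned W would replace this)

def hash_codes(X, W):
    """Linear projection-based hashing: the sign of each projection gives one bit."""
    return (X @ W > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

def triplet_quality(q, q_code, X, codes, n_triplets=500):
    """Fraction of sampled rank triplets (q, x_i, x_j), with x_i closer to q
    than x_j in the original space, whose order the Hamming distances preserve."""
    d = np.linalg.norm(X - q, axis=1)
    correct = total = 0
    for _ in range(n_triplets):
        i, j = rng.choice(len(X), size=2, replace=False)
        if d[i] == d[j]:
            continue  # ties carry no ranking information
        if d[i] > d[j]:
            i, j = j, i  # make x_i the closer point
        total += 1
        if hamming(q_code, codes[i]) < hamming(q_code, codes[j]):
            correct += 1
    return correct / total

codes = hash_codes(X, W)
q = rng.normal(size=16)
q_code = hash_codes(q[None, :], W)[0]
quality = triplet_quality(q, q_code, X, codes)
print(f"fraction of triplet orders preserved: {quality:.2f}")
```

In the paper's framework, `W` would be solved by maximizing this kind of triplet-based ranking quality over the training data rather than drawn at random.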
Pages: 3032-3039
Page count: 8