Deep Hashing with Triplet Loss of Hash Centers and Dissimilar Pairs for Image Retrieval

Times Cited: 0
Authors
Liu, Ye [1 ,3 ,4 ,5 ]
Pan, Yan [1 ,5 ]
Yin, Jian [2 ,5 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou, Peoples R China
[2] Sun Yat Sen Univ, Sch Artificial Intelligence, Zhuhai, Peoples R China
[3] Lizhi Inc, Artificial Intelligence Dept, Beijing, Peoples R China
[4] Lizhi Inc, Big Data Dept, Beijing, Peoples R China
[5] Guangdong Key Lab Big Data Anal & Proc, Guangzhou, Peoples R China
Source
PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
deep hashing; image retrieval; hash center; triplet loss; pre-trained large model;
DOI
10.1109/CSCWD61410.2024.10580369
CLC Number
TP39 [applications of computers];
Discipline Codes
081203; 0835;
Abstract
Image hash code learning based on deep neural networks has a wide range of application scenarios and has become a research hotspot in recent years. In the framework of most existing deep hashing methods, one must first design a loss function that converges quickly, and then select and fine-tune an appropriate pre-trained deep network. To balance global and local constraints, this paper combines the loss functions required by the image feature learning and image hash coding processes, and adds constraints based on a triplet loss over hash centers and dissimilar pairs. The proposed loss function jointly accounts for global hash centers, local dissimilarity between images, and image classification labels during training. Pre-trained large models are incorporated into the framework to extract image embeddings that assist in generating hash codes. Comparative experiments on several image datasets show that the proposed DTLH method achieves better results than traditional hashing methods and existing deep hashing methods across different code lengths.
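The record does not give the paper's exact loss formulation, so the following is only a minimal sketch of the general idea the abstract describes: a triplet-style hinge that pulls an image's hash code toward its class's global hash center while pushing it away from the code of a dissimilar image. The function name, margin value, and the use of squared Euclidean distance on relaxed (real-valued) codes as a smooth surrogate for Hamming distance are all assumptions, not the authors' method.

```python
def triplet_center_loss(h, center, dissim, margin=2.0):
    """Hypothetical triplet-style hinge loss (sketch, not the paper's DTLH loss).

    Pulls the relaxed hash code h toward its class hash center (global
    constraint) and pushes it away from the code of a dissimilar image
    (local constraint). Squared Euclidean distance stands in for
    Hamming distance on real-valued codes."""
    d_pos = sum((a - b) ** 2 for a, b in zip(h, center))  # distance to the global hash center
    d_neg = sum((a - b) ** 2 for a, b in zip(h, dissim))  # distance to the dissimilar pair's code
    return max(0.0, d_pos - d_neg + margin)               # zero once d_neg >= d_pos + margin

# A code near its center and far from the dissimilar image incurs no loss:
print(triplet_center_loss([1, 1, -1, -1], [1, 1, -1, 1], [-1, -1, 1, 1]))  # 0.0
# A code that coincides with a dissimilar image's code is penalized:
print(triplet_center_loss([1, 1, -1, 1], [1, 1, -1, 1], [1, 1, -1, 1]))    # 2.0
```

In practice such a term would be summed over mini-batch triplets and combined with a classification loss on the labels, as the abstract indicates.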
Pages: 2559-2564
Number of Pages: 6