An optimized deep supervised hashing model for fast image retrieval

Cited: 3
|
Authors
Hussain, Abid [1 ]
Li, Heng-Chao [1 ]
Ali, Danish [2 ]
Ali, Muqadar [1 ]
Abbas, Fakhar [3 ]
Hussain, Mehboob [1 ]
Affiliations
[1] Southwest Jiao Tong Univ, Sch Informat & Comp Sci, Chengdu 611731, Peoples R China
[2] Dalian Univ Technol, Dept Math, Dalian, Peoples R China
[3] Natl Univ Singapore, Ctr Trusted Internet & Community, Singapore, Singapore
Keywords
Knowledge distillation; Deep supervised hashing; Quantization; Network pruning; Image retrieval; NEURAL-NETWORK; FRAMEWORK;
DOI
10.1016/j.imavis.2023.104668
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As multimedia data grows exponentially, searching for and retrieving relevant images is becoming a challenge for researchers. Hashing is widely adopted for image retrieval because of the high performance achieved with deep neural networks and multiple convolutional layers. Even so, most hashing methods ignore computational cost and memory consumption: a large deep hashing model responds more slowly than a small one. To address these issues, this paper proposes a novel optimized deep supervised hashing method based on a teacher-student approach for fast and precise image retrieval. A small student model is trained using knowledge distilled from a large teacher model together with the information in the one-hot labels, and a weight allocation loss function over the teacher and student models is defined accordingly. Meanwhile, model pruning is applied to further reduce the size of the student model and shorten its response time, so knowledge distillation is performed on the pruned model. Finally, the remaining weights are quantized to shrink the model further. Extensive experimental results on two widely used datasets demonstrate the outstanding efficiency of the proposed method. (c) 2023 Elsevier B.V. All rights reserved.
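The abstract describes a student model trained on a weighted combination of one-hot label supervision and the teacher's soft predictions. A minimal sketch of such a distillation loss is shown below; the exact weighting scheme of the paper is not reproduced here, and the function names, `alpha` balance weight, and temperature `T` are illustrative assumptions, following the conventional form of knowledge-distillation losses.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax over the last axis.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, one_hot, alpha=0.5, T=4.0):
    """Weighted sum of hard-label cross-entropy and soft-label KL divergence.

    alpha balances one-hot supervision against the teacher's soft targets;
    T is the distillation temperature (both names are illustrative).
    """
    # Hard term: cross-entropy between student predictions and one-hot labels.
    p_s = softmax(student_logits)
    hard = -np.sum(one_hot * np.log(p_s + 1e-12), axis=-1).mean()

    # Soft term: KL(teacher || student) at temperature T.
    q_t = softmax(teacher_logits, T)
    q_s = softmax(student_logits, T)
    soft = np.sum(q_t * (np.log(q_t + 1e-12) - np.log(q_s + 1e-12)), axis=-1).mean()

    # T^2 rescales soft-target gradients, as is conventional in distillation.
    return alpha * hard + (1.0 - alpha) * (T ** 2) * soft
```

When the student matches the teacher exactly, the KL term vanishes and only the weighted hard-label term remains; a teacher that disagrees with the student increases the loss.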
Pages: 10
Related Papers
50 in total
  • [1] Deep Supervised Hashing for Fast Image Retrieval
    Liu, Haomiao
    Wang, Ruiping
    Shan, Shiguang
    Chen, Xilin
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 2064 - 2072
  • [2] Deep Supervised Hashing for Fast Image Retrieval
    Liu, Haomiao
    Wang, Ruiping
    Shan, Shiguang
    Chen, Xilin
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2019, 127 (09) : 1217 - 1234
  • [3] Triplet Deep Hashing with Joint Supervised Loss for Fast Image Retrieval
    Li, Mingyong
    Wang, Hongya
    Wang, Liangliang
    Yang, Kaixiang
    Xiao, Yingyuan
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 606 - 613
  • [4] Piecewise supervised deep hashing for image retrieval
    Li, Yannuan
    Wan, Lin
    Fu, Ting
    Hu, Weijun
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (17) : 24431 - 24451
  • [5] Robust Deep Supervised Hashing for Image Retrieval
    Mo, Zhaoguo
    Zhu, Yuesheng
    Zhan, Jiawei
    TWELFTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2020), 2020, 11519
  • [6] Angular Deep Supervised Hashing for Image Retrieval
    Zhou, Chang
    Po, Lai-Man
    Yuen, Wilson Y. F.
    Cheung, Kwok Wai
    Xu, Xuyuan
    Lau, Kin Wai
    Zhao, Yuzhi
    Liu, Mengyang
    Wong, Peter H. W.
    IEEE ACCESS, 2019, 7 : 127521 - 127532
  • [7] An Efficient Supervised Deep Hashing Method for Image Retrieval
    Hussain, Abid
    Li, Heng-Chao
    Ali, Muqadar
    Wali, Samad
    Hussain, Mehboob
    Rehman, Amir
    ENTROPY, 2022, 24 (10)
  • [8] Supervised deep hashing for scalable face image retrieval
    Tang, Jinhui
    Li, Zechao
    Zhu, Xiang
    PATTERN RECOGNITION, 2018, 75 : 25 - 32