Fast and accurate image retrieval using knowledge distillation from multiple deep pre-trained networks

Cited by: 2
Authors
Salman, Hasan [1 ]
Taherinia, Amir Hossein [1 ]
Zabihzadeh, Davood [2 ]
Affiliations
[1] Ferdowsi Univ Mashhad, Fac Engn, Comp Engn Dept, Mashhad, Iran
[2] Hakim Sabzevari Univ, Dept Comp Engn, Sabzevar, Iran
Keywords
Information retrieval; Knowledge distillation; Model quantization; Semantic hash coding; Attention mechanism; SCALE; ROTATION; PATTERN
DOI
10.1007/s11042-023-14761-y
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Content-based image retrieval systems aim to retrieve images similar to a query image from a large dataset. The feature extractor and the similarity measure play key roles in these systems. Hand-crafted feature descriptors such as SURF, SIFT, and GIST provide patterns for measuring similarity between images. More recently, deep learning has received much attention in this field because it performs feature extraction and similarity learning simultaneously. Several studies show that feature vectors extracted from pre-trained networks carry richer information than class labels for classification and retrieval. This paper presents an effective method, Deep Multi-teacher Transfer Hash (DMTH), which uses knowledge from several complex models to teach a simple one. Because many pre-trained models are available and their extracted features are diverse, we employ an attention mechanism to combine their features into a richer representation and transfer it to a simple student model through an appropriate knowledge distillation loss. We evaluate our method on the widely used CIFAR-10 and CIFAR-100 datasets and compare it with other state-of-the-art methods. The experimental results show that DMTH improves image retrieval performance by learning better features, obtained through the attention mechanism over multiple teachers, without increasing evaluation time. Specifically, the proposed multi-teacher model surpasses the best individual teacher by 2% in accuracy on CIFAR-10, and our knowledge transfer mechanism boosts the performance of the student model by more than 4%.
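The abstract outlines the core mechanism: features from several pre-trained teacher networks are fused through an attention mechanism and distilled into a compact student model. The paper's exact DMTH architecture and loss are not given on this record page, so the following is only a minimal sketch of attention-weighted multi-teacher feature distillation; the class name, dimensions, projection layers, and MSE-based distillation loss are illustrative assumptions rather than the authors' formulation.

```python
# Minimal sketch of attention-weighted multi-teacher feature distillation.
# All names, dimensions, and the MSE distillation loss are assumptions for
# illustration; the paper's DMTH loss and attention design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTeacherAttentionDistiller(nn.Module):
    def __init__(self, teacher_dims, student_dim, common_dim=256):
        super().__init__()
        # Project each teacher's feature space into a common embedding space.
        self.teacher_proj = nn.ModuleList(
            [nn.Linear(d, common_dim) for d in teacher_dims]
        )
        # Project the student feature so it can be compared with the fused target.
        self.student_proj = nn.Linear(student_dim, common_dim)
        # One attention score per teacher, conditioned on the student feature.
        self.attn = nn.Linear(student_dim, len(teacher_dims))

    def forward(self, student_feat, teacher_feats):
        # teacher_feats: list of tensors, one per teacher, shape (B, teacher_dims[i]);
        # in practice they would come from frozen pre-trained backbones.
        projected = torch.stack(
            [proj(f) for proj, f in zip(self.teacher_proj, teacher_feats)], dim=1
        )  # (B, num_teachers, common_dim)
        weights = F.softmax(self.attn(student_feat), dim=1)  # (B, num_teachers)
        # Attention-weighted fusion of the teachers' projected features.
        fused_target = (weights.unsqueeze(-1) * projected).sum(dim=1)  # (B, common_dim)
        # Distillation loss: pull the student embedding toward the fused teacher target.
        return F.mse_loss(self.student_proj(student_feat), fused_target)


if __name__ == "__main__":
    # Toy usage with random features standing in for teacher/student outputs.
    distiller = MultiTeacherAttentionDistiller(teacher_dims=[2048, 1536], student_dim=512)
    student_feat = torch.randn(8, 512)
    teacher_feats = [torch.randn(8, 2048), torch.randn(8, 1536)]
    loss = distiller(student_feat, teacher_feats)
    print(loss.item())
```

Under this reading, the teacher backbones stay frozen while the projection and attention layers are trained jointly with the student, so the attention weights learn how much each teacher contributes per sample.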
Pages: 33937-33959
Number of pages: 23
Related Papers
27 records in total
  • [1] Salman, Hasan; Taherinia, Amir Hossein; Zabihzadeh, Davood. Fast and accurate image retrieval using knowledge distillation from multiple deep pre-trained networks. Multimedia Tools and Applications, 2023, 82: 33937-33959.
  • [2] Yang, Zhao; Zhang, Yuanzhe; Sui, Dianbo; Ju, Yiming; Zhao, Jun; Liu, Kang. Explanation Guided Knowledge Distillation for Pre-trained Language Model Compression. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024, 23 (02).
  • [3] Yang, Xiaoyu; Li, Qiujia; Woodland, Philip C. Knowledge Distillation for Neural Transducers from Large Self-Supervised Pre-trained Models. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 8527-8531.
  • [4] Zhou, Qinhong; Li, Peng; Liu, Yang; Guan, Yuyang; Xing, Qizhou; Chen, Ming; Sun, Maosong; Liu, Yang. AdaDS: Adaptive data selection for accelerating pre-trained language model knowledge distillation. AI Open, 2023, 4: 56-63.
  • [5] Lan, Lin; Wang, Pinghui; Shi, Rui; Liu, Tingqing; Zeng, Juxiang; Sun, Feiyang; Ren, Yang; Tao, Jing; Guan, Xiaohong. Grand: A Fast and Accurate Graph Retrieval Framework via Knowledge Distillation. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), 2024: 1639-1648.
  • [6] Han, Minglun; Chen, Feilong; Shi, Jing; Xu, Shuang; Xu, Bo. Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation. Interspeech 2023, 2023: 1364-1368.
  • [7] Yang, Xiwei; Yun, Jing; Zheng, Bofei; Liu, Limin; Ban, Qi. Oversea Cross-Lingual Summarization Service in Multilanguage Pre-Trained Model through Knowledge Distillation. Electronics, 2023, 12 (24).
  • [8] Liu, Fan; Wang, Bin; Zhang, Qian. Deep Learning of Pre-Classification for Fast Image Retrieval. 2018 International Conference on Algorithms, Computing and Artificial Intelligence (ACAI 2018), 2018.
  • [9] Lee, Kunyoung; Kim, Seunghyun; Lee, Eui Chul. Fast and Accurate Facial Expression Image Classification and Regression Method Based on Knowledge Distillation. Applied Sciences-Basel, 2023, 13 (11).
  • [10] Heo, Jungwoo; Lim, Chan-yeong; Kim, Ju-ho; Shin, Hyun-seo; Yu, Ha-Jin. One-Step Knowledge Distillation and Fine-Tuning in Using Large Pre-Trained Self-Supervised Learning Models for Speaker Verification. Interspeech 2023, 2023: 5271-5275.