Fast and accurate image retrieval using knowledge distillation from multiple deep pre-trained networks

Cited by: 2
Authors
Salman, Hasan [1 ]
Taherinia, Amir Hossein [1 ]
Zabihzadeh, Davood [2 ]
Affiliations
[1] Ferdowsi Univ Mashhad, Fac Engn, Comp Engn Dept, Mashhad, Iran
[2] Hakim Sabzevari Univ, Dept Comp Engn, Sabzevar, Iran
Keywords
Information retrieval; Knowledge distillation; Model quantization; Semantic hash coding; Attention mechanism; SCALE; ROTATION; PATTERN
DOI
10.1007/s11042-023-14761-y
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Content-based image retrieval systems aim to retrieve images similar to a query image from a large dataset. The feature extractor and similarity measure play key roles in these systems. Hand-crafted feature descriptors such as SURF, SIFT, and GIST provide patterns for measuring the similarity between images. Recently, deep learning has received much attention in this field because it performs feature extraction and similarity learning simultaneously. Various studies show that feature vectors extracted from pre-trained networks contain richer information than class labels for classification and retrieval. This paper presents an effective method, Deep Multi-teacher Transfer Hash (DMTH), which uses knowledge from several complex models to teach a simple one. Given the variety of available pre-trained models and the diversity among their extracted features, we utilize an attention mechanism to obtain richer features from them and teach a simple model via an appropriate knowledge distillation loss. We test our method on the widely used CIFAR-10 and CIFAR-100 datasets and compare it with other state-of-the-art methods. The experimental results show that DMTH improves image retrieval performance by learning better features, obtained through an attention mechanism from multiple teachers, without increasing evaluation time. Specifically, the proposed multi-teacher model surpasses the best individual teacher by 2% in accuracy on CIFAR-10. Meanwhile, it boosts the performance of the student model by more than 4% using our knowledge transfer mechanism.
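As a rough illustration of the idea described in the abstract, the sketch below attention-weights the feature vectors of several teachers and uses the fused embedding as the regression target of a distillation loss for the student. This is a minimal sketch under stated assumptions: the dot-product scoring, softmax weighting, and L2 loss are illustrative choices, not the paper's exact DMTH formulation, and all function names are hypothetical.

```python
import numpy as np

def attention_weights(student_feat, teacher_feats):
    # Score each teacher embedding by its dot-product similarity to the
    # student's embedding, then softmax-normalize the scores.
    # (Hypothetical scoring; the paper's attention form may differ.)
    scores = np.array([student_feat @ t for t in teacher_feats])
    exp = np.exp(scores - scores.max())  # subtract max for stability
    return exp / exp.sum()

def distillation_target(student_feat, teacher_feats):
    # Attention-weighted combination of the teachers' feature vectors,
    # used as the target the student is regressed toward.
    w = attention_weights(student_feat, teacher_feats)
    return sum(wi * t for wi, t in zip(w, teacher_feats))

def distillation_loss(student_feat, teacher_feats):
    # Simple squared-L2 distance between the student embedding and the
    # fused multi-teacher target (an assumed stand-in for the KD loss).
    target = distillation_target(student_feat, teacher_feats)
    return float(np.sum((student_feat - target) ** 2))
```

A teacher whose features align closely with the student's receives a larger weight, so the fused target emphasizes the most relevant teachers rather than averaging them uniformly.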
Pages: 33937-33959
Page count: 23
Related papers
27 items in total
  • [21] Knowledge Distillation of Attention and Residual U-Net: Transfer from Deep to Shallow Models for Medical Image Classification
    Liao, Zhifang; Dong, Quanxing; Ge, Yifan; Liu, Wenlong; Chen, Huaiyi; Song, Yucheng
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XIII, 2024, 14437: 162-173
  • [22] Disentangling the intrinsic feature from the related feature in image classification using knowledge distillation and object replacement
    Lu, Zhenyu; Lu, Yonggang
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 246
  • [23] A novel lightweight deep learning based approaches for the automatic diagnosis of gastrointestinal disease using image processing and knowledge distillation techniques
    Waheed, Zafran; Gui, Jinsong; Bin Heyat, Md Belal; Parveen, Saba; Bin Hayat, Mohd Ammar; Iqbal, Muhammad Shahid; Aya, Zouheir; Nawabi, Awais Khan; Sawan, Mohamad
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2025, 260
  • [24] Efficient hyperspectral image segmentation for biosecurity scanning using knowledge distillation from multi-head teacher
    Minh Hieu Phan; Phung, Son Lam; Luu, Khoa; Bouzerdoum, Abdesselam
    NEUROCOMPUTING, 2022, 504: 189-203
  • [25] Attention-based Bidirectional Long Short-Term Memory Networks for Relation Classification Using Knowledge Distillation from BERT
    Wang, Zihan; Yang, Bo
    2020 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2020: 562-568
  • [26] PA-Seg: Learning From Point Annotations for 3D Medical Image Segmentation Using Contextual Regularization and Cross Knowledge Distillation
    Zhai, Shuwei; Wang, Guotai; Luo, Xiangde; Yue, Qiang; Li, Kang; Zhang, Shaoting
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (08): 2235-2246
  • [27] Classification of multiple emotional states from facial expressions in head-fixed mice using a deep learning-based image analysis
    Tanaka, Yudai; Nakata, Takuto; Hibino, Hiroshi; Nishiyama, Masaaki; Ino, Daisuke
    PLOS ONE, 2023, 18 (07)