SAKD: Sparse attention knowledge distillation

Cited by: 4
Authors
Guo, Zhen [1 ,2 ]
Zhang, Pengzhou [1 ]
Liang, Peng [2 ]
Affiliations
[1] Commun Univ China, State Key Lab Media Convergence & Commun, Dingfuzhuang East St 1, Beijing 100024, Peoples R China
[2] China Unicom Smart City Res Inst, Shoutinanlu 9, Beijing 100024, Peoples R China
Keywords
Knowledge distillation; Attention mechanisms; Sparse attention mechanisms
DOI
10.1016/j.imavis.2024.105020
CLC Number
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep learning techniques have attracted significant interest due to their success with large models. However, large models often require massive computational resources, which challenges end devices with limited storage and compute. Transferring knowledge from large models to small ones, so that similar results can be achieved with limited resources, requires further research. Knowledge distillation, which uses teacher-student models to migrate the capabilities of large models to small ones, has been widely applied to model compression and knowledge transfer. In this paper, a novel knowledge distillation approach based on a sparse attention mechanism (SAKD) is proposed. SAKD computes attention using student features as queries and teacher features as keys and values, and sparsifies the attention values by random deactivation. The sparse attention values are then used to reweight the feature distance of each teacher-student feature pair, which avoids negative transfer. Comprehensive experiments demonstrate the effectiveness and generality of the approach, and SAKD outperforms previous state-of-the-art methods on image classification tasks.
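The mechanism described in the abstract can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of the idea and not the authors' implementation: the projection layers (q_proj, k_proj), the tensor shapes, the use of dropout to model "random deactivation", and the choice of an L2 distance between projected features are all assumptions made for illustration only.

```python
# Minimal sketch (assumptions noted above) of a sparse-attention-weighted
# distillation loss: student features act as queries, teacher features as keys,
# the attention map is sparsified by random deactivation (here: dropout), and
# the sparse weights reweight per-pair feature distances.
import torch
import torch.nn as nn


class SparseAttentionKD(nn.Module):
    def __init__(self, student_dim, teacher_dim, attn_dim=128, drop_prob=0.5):
        super().__init__()
        self.q_proj = nn.Linear(student_dim, attn_dim)   # student features -> queries
        self.k_proj = nn.Linear(teacher_dim, attn_dim)   # teacher features -> keys
        self.drop = nn.Dropout(drop_prob)                # "random deactivation" of attention values

    def forward(self, student_feats, teacher_feats):
        # student_feats: (B, Ns, Ds), teacher_feats: (B, Nt, Dt)
        q = self.q_proj(student_feats)                                              # (B, Ns, A)
        k = self.k_proj(teacher_feats)                                              # (B, Nt, A)
        attn = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)  # (B, Ns, Nt)
        attn = self.drop(attn)                           # zero out a random subset -> sparse attention

        # Pairwise L2 distances between projected student and teacher features;
        # only the attention weights (not teacher "values") are needed to reweight
        # the distances in this sketch.
        dist = torch.cdist(q, k, p=2)                    # (B, Ns, Nt)

        # Weakly related pairs get small weights, limiting negative transfer.
        return (attn * dist).mean()


# Hypothetical usage: batch of 2, 4 student tokens of dim 64, 6 teacher tokens of dim 256.
loss = SparseAttentionKD(64, 256)(torch.randn(2, 4, 64), torch.randn(2, 6, 256))
```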
Pages: 8