Abnormality Detection of Blast Furnace Tuyere Based on Knowledge Distillation and a Vision Transformer

Cited: 2
Authors
Song, Chuanwang [1 ]
Zhang, Hao [1 ]
Wang, Yuanjun [1 ]
Wang, Yuhui [1 ]
Hu, Keyong [1 ]
Affiliations
[1] Qingdao Univ Technol, Sch Informat & Control Engn, Qingdao 266520, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 18
Keywords
transformer; CNN; knowledge distillation; self-attention mechanism; image classification
DOI
10.3390/app131810398
Chinese Library Classification (CLC) Number
O6 [Chemistry]
Discipline Classification Code
0703
Abstract
The blast furnace tuyere is a key observation point in hot metal production, used primarily to assess the internal state of the furnace. However, detecting abnormal tuyere conditions has relied heavily on manual judgment, a practice with inherent limitations. To address this issue, we propose a tuyere abnormality detection model based on knowledge distillation and a vision transformer (ViT), in which ResNet-50 serves as the teacher model and distills knowledge into the ViT student model. First, we introduce spatial attention modules to enhance the model's perception and feature-extraction capabilities across different image regions. Second, we reduce the depth of the ViT and improve its self-attention mechanism to reduce training loss. Third, we apply the knowledge distillation strategy to lighten the model and enhance its generalization capability. Finally, we evaluate the model on tuyere abnormality detection and compare it with other classification methods such as VGG-19, ResNet-101, and ResNet-50. Experimental results show that our model achieves a classification accuracy of 97.86% on a company's tuyere image dataset, surpassing the original ViT model by 1.12% and the improved ViT model without knowledge distillation by 0.34%. On the classical fine-grained image datasets Stanford Dogs and CUB-200-2011, the model achieves competitive accuracies of 90.31% and 77.65%, respectively, comparable to other classification models.
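The abstract describes a teacher-student setup in which a ResNet-50 teacher distills knowledge into a ViT student. The sketch below is a minimal, hypothetical PyTorch illustration of such a response-based distillation loss; the specific model variants, temperature, loss weighting, and number of tuyere condition classes are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of ResNet-50 -> ViT response-based knowledge distillation.
# Hyperparameters and model choices are assumptions, not the paper's settings.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, vit_b_16  # vit_b_16 stands in for the simplified ViT student

NUM_CLASSES = 5        # assumed number of tuyere condition classes
T, ALPHA = 4.0, 0.7    # assumed distillation temperature and soft/hard loss weighting

teacher = resnet50(num_classes=NUM_CLASSES).eval()  # in practice, a trained teacher
student = vit_b_16(num_classes=NUM_CLASSES)

def distillation_loss(student_logits, teacher_logits, labels):
    # Soft targets: KL divergence between temperature-scaled class distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return ALPHA * soft + (1.0 - ALPHA) * hard

# Dummy batch of tuyere images for illustration only.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
with torch.no_grad():
    teacher_logits = teacher(images)
loss = distillation_loss(student(images), teacher_logits, labels)
loss.backward()
```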
Pages: 15