Semi-supervised Lightweight Fabric Defect Detection

Cited by: 0
Authors
Dong, Xiaoliang [1 ]
Liu, Hao [1 ]
Luo, Yuexin [1 ]
Yan, Yubao [1 ]
Liang, Jiuzhen [1 ]
Institutions
[1] Changzhou Univ, Changzhou 213164, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT IV | 2025, Vol. 15034
Keywords
Fabric defect detection; Sel-fill; Semi-supervised; Lightweight;
DOI
10.1007/978-981-97-8505-6_8
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Fabric defect detection can greatly enhance the quality of fabric production. However, the high cost of annotating defects and the computational complexity of detection networks are the main challenges in defect detection. To address these challenges, this paper proposes a semi-supervised lightweight fabric defect detection algorithm (SDA-Net). During semi-supervised training, the algorithm uses labeled defect samples and normal samples to learn latent features and localize defects accurately. First, to address the shortage of labeled defect samples caused by high annotation costs, a data augmentation method called Sel-fill is proposed. Sel-fill randomly samples image blocks whose sizes are drawn from a truncated normal distribution and inserts them at random positions within normal images, thereby generating labeled defect samples. Second, a lightweight neural network architecture is constructed using depth-wise separable convolution (DSConv), which effectively reduces the number of parameters and computations while maintaining performance. Finally, a max-pooling coordinate attention mechanism (MpCA) suppresses background noise during multi-scale feature fusion, improving detection precision. With DSConv and MpCA, SDA-Net achieves an average detection precision of 62.6%, a 4.5% improvement over the previous method, while the trainable parameters amount to only 9.35 MB (a 42.53% reduction) and the computations are reduced by 68.84%.
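The abstract describes Sel-fill as sampling image blocks whose sizes follow a truncated normal distribution and pasting them into normal images to synthesize labeled defect samples. A minimal sketch of that idea in pure Python, assuming square blocks and a rejection-sampling truncation; the function names, default parameters (block-size mean, std, and bounds), and the box-style pseudo-labels are illustrative assumptions, not the paper's actual implementation:

```python
import random


def truncated_normal(mean, std, low, high):
    # Rejection-sample a normal variate restricted to [low, high].
    while True:
        x = random.gauss(mean, std)
        if low <= x <= high:
            return x


def sel_fill(normal_img, patch_src, n_patches=3, mean=16.0, std=8.0, lo=4, hi=32):
    # Copy the normal image, then paste n_patches square blocks cropped
    # from patch_src at random positions; return the augmented image plus
    # the pasted boxes (x, y, w, h) as synthetic defect labels.
    h, w = len(normal_img), len(normal_img[0])
    img = [row[:] for row in normal_img]
    boxes = []
    for _ in range(n_patches):
        s = int(truncated_normal(mean, std, lo, hi))  # block size ~ truncated normal
        y = random.randrange(h - s + 1)               # random paste position
        x = random.randrange(w - s + 1)
        py = random.randrange(len(patch_src) - s + 1)  # random crop position
        px = random.randrange(len(patch_src[0]) - s + 1)
        for dy in range(s):
            for dx in range(s):
                img[y + dy][x + dx] = patch_src[py + dy][px + dx]
        boxes.append((x, y, s, s))
    return img, boxes
```

Because the paste positions and sizes are known, every synthesized sample comes with free bounding-box labels, which is what makes the augmentation usable for semi-supervised detector training.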
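The parameter and computation savings the abstract attributes to DSConv come from factoring a standard convolution into a per-channel depth-wise convolution followed by a 1 x 1 point-wise convolution. A back-of-envelope weight count (the layer sizes below are illustrative, not taken from the paper's architecture):

```python
def conv_params(c_in, c_out, k):
    # Weight count of a standard k x k convolution (bias ignored).
    return c_in * c_out * k * k


def dsconv_params(c_in, c_out, k):
    # Depth-wise separable: one k x k filter per input channel,
    # then a 1 x 1 point-wise convolution to mix channels.
    return c_in * k * k + c_in * c_out


std_p = conv_params(128, 256, 3)   # 294,912 weights
ds_p = dsconv_params(128, 256, 3)  # 1,152 + 32,768 = 33,920 weights
print(f"DSConv reduction: {1 - ds_p / std_p:.1%}")
```

For a 3 x 3 layer the factorization keeps roughly 1/c_out + 1/k² of the original weights, which is why swapping in DSConv shrinks both the parameter count and the FLOPs without changing the layer's input/output shape.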
Pages: 106-120 (15 pages)