Semisupervised Defect Segmentation With Pairwise Similarity Map Consistency and Ensemble-Based Cross Pseudolabels

Cited by: 12
Authors
Sime, Dejene M. [1]
Wang, Guotai [1]
Zeng, Zhi [1]
Wang, Wei [1]
Peng, Bei [1]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Mech & Elect Engn, Chengdu 611731, Peoples R China
Keywords
Consistency regularization; defect segmentation; pairwise similarity; pseudolabels; semisupervised learning
DOI
10.1109/TII.2022.3230785
CLC Number
TP [Automation Technology; Computer Technology]
Subject Classification Code
0812
Abstract
Deep-learning-based automatic defect segmentation is an active research area in computer vision for intelligent industrial inspection. Several state-of-the-art models for image segmentation have recently been proposed. However, their high performance depends heavily on the availability of large labeled datasets, which limits the full potential of deep learning methods in industrial inspection. In this article, we propose a novel method for semisupervised defect segmentation based on pairwise similarity map consistency with ensemble-based cross pseudolabels, which learns from a limited number of labeled samples while exploiting additional label-free samples. The proposed approach uses three network branches that are regularized by pairwise similarity map consistency, and each branch is supervised, on the unlabeled samples, by pseudolabels generated from the ensemble of the other two networks' predictions. The proposed method achieves significant improvement over both the baseline of learning only from the labeled images and current state-of-the-art semisupervised methods. Ablation studies and extensive experiments on various parameters and components demonstrate that our method achieves state-of-the-art results on three different datasets.
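The abstract only sketches the training objective at a high level; the following is a minimal PyTorch-style sketch of how three-branch training with pairwise similarity map consistency and ensemble-based cross pseudolabels could be wired up. The names (pairwise_similarity, semi_supervised_step, lambda_sim, lambda_cps) and the assumption that each network returns (logits, features) are illustrative, not the authors' released implementation.

import torch
import torch.nn.functional as F


def pairwise_similarity(features):
    # Cosine-similarity map between all spatial positions of a feature map.
    # features: (B, C, H, W) -> (B, H*W, H*W). In practice this would be
    # computed on a downsampled decoder feature map to keep H*W small.
    flat = F.normalize(features.flatten(2), dim=1)        # (B, C, H*W)
    return torch.bmm(flat.transpose(1, 2), flat)          # (B, H*W, H*W)


def semi_supervised_step(nets, labeled_images, masks, unlabeled_images,
                         lambda_sim=0.1, lambda_cps=1.0):
    # nets: three segmentation networks with the same architecture but
    # different initializations; each is assumed to return (logits, features).
    loss = 0.0

    # 1) Supervised loss on the labeled images for every branch.
    for net in nets:
        logits, _ = net(labeled_images)
        loss = loss + F.cross_entropy(logits, masks)

    # 2) Unlabeled images: forward through all branches once.
    outputs = [net(unlabeled_images) for net in nets]
    sims = [pairwise_similarity(feat) for _, feat in outputs]

    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3

        # Pairwise similarity map consistency between branch pairs
        # (each unordered pair is visited exactly once by this loop).
        loss = loss + lambda_sim * F.mse_loss(sims[i], sims[j])

        # Ensemble-based cross pseudolabels: the averaged prediction of the
        # other two branches supervises branch i on the unlabeled images.
        with torch.no_grad():
            ensemble = (outputs[j][0].softmax(dim=1) +
                        outputs[k][0].softmax(dim=1)) / 2
            pseudo = ensemble.argmax(dim=1)
        loss = loss + lambda_cps * F.cross_entropy(outputs[i][0], pseudo)

    return loss

At inference time only a single branch (or the ensemble of all three) would be used; the unlabeled-data terms act purely as training-time regularizers.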
Pages: 9535-9545
Page count: 11