Semisupervised Contrastive Memory Network for Industrial Process Working Condition Monitoring

Cited by: 0
Authors
Tang, Zhaohui [1 ]
Zhang, Jin [1 ,2 ]
Xie, Yongfang [1 ]
Ding, Steven X. [3 ]
Ai, Mingxi [4 ]
Affiliations
[1] Cent South Univ, Sch Automat, Changsha 410083, Peoples R China
[2] Kunming Univ Sci & Technol, Fac Informat Engn & Automat, Kunming 650500, Yunnan, Peoples R China
[3] Univ Duisburg Essen, Inst Automat Control & Complex Syst, D-47057 Duisburg, Germany
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Yunnan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Data models; Predictive models; Monitoring; Perturbation methods; Cognition; Automation; Computer vision; deep learning; memory network; process monitoring; semisupervised learning (SSL);
DOI
10.1109/TIM.2023.3311070
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Code
0808; 0809;
Abstract
Computer vision is increasingly used to monitor working conditions in various industries. However, labeling data for this purpose can be costly, which often leads to partially labeled datasets. To overcome this issue, there is growing demand for semisupervised data-driven models that can exploit the abundance of available unlabeled data to improve monitoring performance. While many methods have been developed to improve data efficiency, little attention has been paid to utilizing information from past iterations to further enhance performance. To this end, a semisupervised contrastive memory network is developed. The network guides embedding functions to map inputs so that they match its supporting memories learned in past iterations, and a mix-up unsupervised learning strategy, which integrates consistency regularization with mutual information, is designed to enable training of the network with unlabeled data. The experimental results show that the proposed method produces more discriminative representations and benefits semisupervised learning. Notably, on froth flotation process monitoring with Inception-V3 as the backbone, it achieves 90.03% top-1 accuracy with 16% labeled data, which is comparable to the fully supervised method trained with 100% of the labeled data, and largely outperforms existing semisupervised methods.
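The abstract describes a mix-up unsupervised strategy combining consistency regularization with a mutual-information term, but this record does not give the paper's exact loss. The sketch below is therefore a generic, assumed instance of that family of losses: mix-up interpolates two unlabeled inputs, a consistency term pulls the prediction on the mixed input toward the correspondingly mixed pseudo-labels, and a standard mutual-information surrogate (low conditional entropy, high marginal entropy) encourages confident yet diverse predictions. All function names and the precise form of each term are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixup(x1, x2, alpha=0.75):
    # Mix-up interpolation between two unlabeled inputs.
    lam = np.random.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)  # keep the mix dominated by x1
    return lam * x1 + (1.0 - lam) * x2, lam

def unsup_loss(p_mixed, q1, q2, lam):
    """Assumed unsupervised objective for unlabeled data.

    p_mixed: model predictions (probabilities) on the mixed inputs
    q1, q2:  pseudo-label distributions for the two original inputs
    lam:     the mix-up coefficient used to form the mixed inputs
    """
    # Consistency regularization: prediction on the mixed input
    # should match the identically mixed pseudo-labels.
    q_mix = lam * q1 + (1.0 - lam) * q2
    consistency = np.mean((p_mixed - q_mix) ** 2)

    # Mutual-information surrogate: I(X; Y) ≈ H(marginal) - H(conditional).
    # Minimizing (cond_ent - marg_ent) makes per-sample predictions
    # confident while keeping the batch-level marginal diverse.
    eps = 1e-12
    cond_ent = -np.mean(np.sum(p_mixed * np.log(p_mixed + eps), axis=-1))
    marg = p_mixed.mean(axis=0)
    marg_ent = -np.sum(marg * np.log(marg + eps))

    return consistency + cond_ent - marg_ent
```

In a full training loop, `q1` and `q2` would come from sharpened predictions of the memory-augmented network on augmented views, and this unlabeled loss would be added to the supervised loss on the labeled subset.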
Pages: 10
References
29 in total
[1]   Two-Stream Deep Feature-Based Froth Flotation Monitoring Using Visual Attention Clues [J].
Ai, Mingxi ;
Xie, Yongfang ;
Tang, Zhaohui ;
Zhang, Jin ;
Gui, Weihua .
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
[2]   Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning [J].
Arazo, Eric ;
Ortego, Diego ;
Albert, Paul ;
O'Connor, Noel E. ;
McGuinness, Kevin .
2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
[3]  
Berthelot D, 2020, INT C LEARN REPR
[4]  
Berthelot D, 2019, ADV NEUR IN, V32
[5]  
Blum A., 1998, Proceedings of the Eleventh Annual Conference on Computational Learning Theory, P92, DOI 10.1145/279943.279962
[6]   Semi-supervised Deep Learning with Memory [J].
Chen, Yanbei ;
Zhu, Xiatian ;
Gong, Shaogang .
COMPUTER VISION - ECCV 2018, PT I, 2018, 11205 :275-291
[7]   Sensor-Fault Detection, Isolation and Accommodation for Digital Twins via Modular Data-Driven Architecture [J].
Darvishi, Hossein ;
Ciuonzo, Domenico ;
Eide, Eivind Roson ;
Rossi, Pierluigi Salvo .
IEEE SENSORS JOURNAL, 2021, 21 (04) :4827-4838
[8]   Feature Concentration for Supervised and Semisupervised Learning With Unbalanced Datasets in Visual Inspection [J].
Jang, Jiyong ;
Yoon, Sungroh .
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2021, 68 (08) :7620-7630
[9]  
Kaiser L., 2017, P INT C LEARN REPR I, P1
[10]  
Krizhevsky A., 2009, CIFAR-100 Dataset