Fault classification method based on fast k-nearest neighbor with hybrid feature generation and K-medoids clustering

Cited by: 0
Authors
Zhou, Zhe [1 ]
Zeng, Fanliang [2 ]
Huang, Jiacheng [1 ]
Zheng, Jinhui [2 ]
Li, Zuxin [1 ]
Affiliations
[1] Huzhou Univ, Sch Engn, Huzhou, Peoples R China
[2] Hangzhou Dianzi Univ, Sch Automat, Hangzhou, Peoples R China
Source
2020 35TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION (YAC) | 2020
Keywords
feature selection; feature extraction; clustering; fault classification; K-Nearest Neighbor; ALGORITHM;
DOI
10.1109/YAC51587.2020.9337657
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Fast and accurate classification of process faults in industry is important for ensuring reliable operation. The traditional K-Nearest Neighbor (KNN) algorithm must compute the distance between the sample to be classified and all training samples obtained under the normal operating condition (NOC), which incurs a large computational cost when the number of NOC samples is huge. To address this, existing methods reduce the training samples with clustering algorithms so as to lower the computational cost of KNN. However, reducing the training samples usually decreases fault classification accuracy, and misclassification directly affects the normal production and safety of industrial processes. To this end, a fast KNN fault classification method based on hybrid feature generation and K-Medoids clustering is proposed. First, a hybrid feature generation method combining the ReliefF algorithm and linear discriminant analysis is used to select and extract sample features, thereby enhancing inter-class separability and improving fault classification accuracy. Then, the K-Medoids clustering algorithm is used to select a few representative training samples and reduce the computational complexity of the KNN algorithm. Finally, simulations on the Tennessee-Eastman process verify that the proposed algorithm outperforms four related methods and requires far less running time than the basic KNN algorithm while retaining higher classification accuracy.
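The pipeline described in the abstract (ReliefF feature selection, LDA feature extraction, per-class K-Medoids sample reduction, then KNN on the reduced set) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: parameter values, the per-class medoid counts, and the use of the skrebate, scikit-learn, and scikit-learn-extra packages are assumptions for illustration only.

# Sketch of a fast-KNN pipeline with hybrid feature generation and K-Medoids
# (illustrative assumptions: skrebate for ReliefF, scikit-learn-extra for KMedoids).
import numpy as np
from skrebate import ReliefF                                   # ReliefF feature selection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn_extra.cluster import KMedoids                     # K-Medoids clustering
from sklearn.neighbors import KNeighborsClassifier

def fit_fast_knn(X_train, y_train, n_select=20, n_medoids_per_class=50, k=5):
    # 1) Hybrid feature generation: ReliefF selection followed by LDA extraction.
    relief = ReliefF(n_features_to_select=n_select, n_neighbors=10)
    X_sel = relief.fit_transform(X_train, y_train)
    lda = LinearDiscriminantAnalysis()
    X_feat = lda.fit_transform(X_sel, y_train)

    # 2) Per-class K-Medoids: keep only the medoids as representative samples.
    reps_X, reps_y = [], []
    for c in np.unique(y_train):
        Xc = X_feat[y_train == c]
        n_clusters = min(n_medoids_per_class, len(Xc))
        km = KMedoids(n_clusters=n_clusters, random_state=0).fit(Xc)
        reps_X.append(km.cluster_centers_)
        reps_y.append(np.full(n_clusters, c))
    reps_X, reps_y = np.vstack(reps_X), np.concatenate(reps_y)

    # 3) KNN trained on the reduced, feature-transformed set.
    knn = KNeighborsClassifier(n_neighbors=k).fit(reps_X, reps_y)

    # Classify new samples through the same feature pipeline.
    def predict(X_new):
        return knn.predict(lda.transform(relief.transform(X_new)))
    return predict

Because the KNN search runs over only the class medoids rather than the full training set, its cost scales with the number of medoids, which is the source of the speed-up the paper targets.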
Pages: 568-573
Number of pages: 6