A joint learning framework for optimal feature extraction and multi-class SVM

Cited: 13
Authors
Lai, Zhihui [1 ,4 ]
Liang, Guangfei [1 ]
Zhou, Jie [1 ]
Kong, Heng [2 ]
Lu, Yuwu [3 ]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Guangdong, Peoples R China
[2] BaoAn Cent Hosp Shenzhen, Dept Breast & Thyroid Surg, Shenzhen 518102, Guangdong, Peoples R China
[3] South China Normal Univ, Sch Software, Foshan 528225, Guangdong, Peoples R China
[4] Guangdong Key Lab Intelligent Informat Proc, Shenzhen 518060, Guangdong, Peoples R China
Keywords
Feature extraction and selection; Pattern recognition; Linear discriminant analysis; SVM classification; SUPPORT VECTOR MACHINE; DIMENSIONALITY REDUCTION;
DOI: 10.1016/j.ins.2024.120656
CLC Classification: TP [Automation technology, computer technology]
Subject Classification: 0812
Abstract
In high-dimensional data classification, effectively extracting discriminative features while eliminating redundancy is crucial for enhancing the performance of classifiers such as the Support Vector Machine (SVM). However, previous studies have decoupled feature extraction from the development of the SVM, leading to suboptimal classification accuracy. To address this problem, we propose a novel joint learning framework that combines optimal feature extraction and multi-class SVM, incorporating a generalized regression form to learn a discriminative latent subspace. The projected data in this subspace are more likely to have a larger margin between classes and to align with the SVM classification mechanism, enhancing overall classification performance. Three iterative algorithms are presented to obtain optimal solutions with guaranteed convergence, and theoretical analyses reveal their fundamental nature. In some special cases, the optimal linear projection subspace is equivalent to that obtained from Linear Discriminant Analysis (LDA). We conducted extensive experiments on diverse datasets to evaluate the proposed algorithms, which achieved an accuracy improvement of up to 7.55% over conventional methods.
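The decoupled two-step pipeline that the abstract argues against can be sketched as follows. This is an illustrative baseline only, not the paper's joint method: the dataset, dimensions, and hyperparameters are assumptions, and `scikit-learn`'s off-the-shelf LDA and linear SVM stand in for the generic "feature extraction, then SVM" workflow.

```python
# Sketch of the *decoupled* baseline the abstract contrasts against:
# the projection (LDA) is fit first, independently of the multi-class
# SVM that is then trained on the already-fixed subspace. The joint
# framework in the paper instead optimizes both together.
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)  # 64-dim features, 10 classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: learn the projection in isolation (LDA allows at most C-1 = 9 components).
lda = LinearDiscriminantAnalysis(n_components=9).fit(X_tr, y_tr)

# Step 2: train a multi-class linear SVM on the projected data;
# the SVM has no influence on the subspace chosen in step 1.
svm = LinearSVC(C=1.0, max_iter=10000).fit(lda.transform(X_tr), y_tr)

acc = svm.score(lda.transform(X_te), y_te)
print(f"decoupled LDA+SVM test accuracy: {acc:.3f}")
```

Because step 1 never sees the SVM's margin objective, the learned subspace can be suboptimal for the downstream classifier; closing that gap is the motivation for the joint formulation.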
Pages: 15