Incorporating Symmetric Smooth Regularizations into Sparse Logistic Regression for Classification and Feature Extraction

Cited by: 1
Authors
Wang, Jing [1 ,2 ]
Xie, Xiao [1 ,2 ]
Wang, Pengwei [1 ,2 ]
Sun, Jian [1 ,2 ]
Liu, Yaochen [1 ,2 ]
Zhang, Li [3 ]
Affiliations
[1] Xinyang Normal Univ, Sch Comp & Informat Technol, Xinyang 464000, Peoples R China
[2] Xinyang Normal Univ, Henan Key Lab Anal & Applicat Educ Big Data, Xinyang 464000, Peoples R China
[3] Nanjing Xiaozhuang Univ, Sch Early Childhood Educ, Nanjing 211171, Peoples R China
Source
SYMMETRY-BASEL | 2025, Vol. 17, No. 02
Funding
National Natural Science Foundation of China;
Keywords
logistic regression; classification; feature extraction; sparse regularization; symmetric smooth regularization; minorization-maximization; VARIABLE SELECTION; MODELS;
DOI
10.3390/sym17020151
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
This paper introduces logistic regression with sparse and smooth regularizations (LR-SS), a novel framework that simultaneously enhances both classification and feature extraction capabilities of standard logistic regression. By incorporating a family of symmetric smoothness constraints into sparse logistic regression, LR-SS uniquely preserves underlying structures inherent in structured data, distinguishing it from existing approaches. Within the minorization-maximization (MM) framework, we develop an efficient optimization algorithm that combines coordinate descent with soft-thresholding techniques. Through extensive experiments on both simulated and real-world datasets, including time series and image data, we demonstrate that LR-SS significantly outperforms conventional sparse logistic regression in classification tasks while providing more interpretable feature extraction. The results highlight LR-SS's ability to leverage sparse and symmetric smooth regularizations for capturing intrinsic data structures, making it particularly valuable for machine learning applications requiring both predictive accuracy and model interpretability.
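The abstract describes combining a sparsity (L1) penalty with a symmetric smoothness penalty on the logistic regression weights, optimized with soft-thresholding. The paper's actual LR-SS algorithm (MM with coordinate descent) is not reproduced in this record; as a rough, hypothetical illustration of the underlying idea, the sketch below uses plain proximal gradient descent with an L1 penalty (handled by Donoho's soft-thresholding operator) and a squared second-difference penalty as the smoothness term. All function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: shrinks each entry of z toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_smooth_logreg(X, y, lam1=0.05, lam2=0.05, n_iter=1000, lr=0.5):
    """Illustrative proximal-gradient solver (NOT the paper's MM algorithm)
    for logistic regression with an L1 sparsity penalty (weight lam1) and a
    squared second-difference smoothness penalty (weight lam2).
    Labels y are assumed to be in {0, 1}."""
    n, p = X.shape
    w = np.zeros(p)
    # D is the (p-2) x p second-difference matrix: (D w)_j = w_j - 2 w_{j+1} + w_{j+2}.
    D = np.diff(np.eye(p), n=2, axis=0)
    DtD = D.T @ D
    for _ in range(n_iter):
        pr = 1.0 / (1.0 + np.exp(-X @ w))               # sigmoid predictions
        grad = X.T @ (pr - y) / n + lam2 * (DtD @ w)    # smooth part of the objective
        w = soft_threshold(w - lr * grad, lr * lam1)    # proximal step for the L1 term
    return w
```

In this sketch the smoothness term encourages neighboring coefficients (e.g. adjacent time points or pixels) to vary gradually, while soft-thresholding zeroes out weak coefficients, which is the sparse-plus-smooth structure the abstract attributes to LR-SS.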
Pages: 36