Metafeature Selection via Multivariate Sparse-Group Lasso Learning for Automatic Hyperparameter Configuration Recommendation

Cited by: 3
Authors
Deng, Liping [1 ]
Chen, Wen-Sheng [2 ]
Xiao, Mingqing [1 ]
Affiliations
[1] Southern Illinois Univ Carbondale, Sch Math & Stat Sci, Carbondale, IL 62901 USA
[2] Shenzhen Univ, Coll Math & Stat, Shenzhen 518060, Guangdong, Peoples R China
Funding
U.S. National Science Foundation (NSF)
Keywords
Classification algorithms; Task analysis; Metadata; Support vector machines; Optimization; Kernel; Feature extraction; Automatic hyperparameter recommendation; metafeature selection; metalearning (MtL); multivariate sparse-group Lasso (SGLasso); META; REGRESSION; ALGORITHMS; SEARCH;
DOI
10.1109/TNNLS.2023.3263506
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The performance of classification algorithms is mainly governed by the hyperparameter settings deployed in applications, and the search for desirable hyperparameter configurations is usually quite challenging due to the complexity of datasets. Metafeatures are a group of measures that characterize the underlying dataset from various aspects, and the corresponding recommendation algorithm relies fully on the appropriate selection of metafeatures. Metalearning (MtL), which aims to improve the learning algorithm itself, requires the integration of features, models, and algorithm learning to accomplish this goal. In this article, we develop a multivariate sparse-group Lasso (SGLasso) model embedded with MtL capacity to recommend suitable configurations via learning. The main idea is to select the principal metafeatures by removing redundant or irregular ones, promoting both efficiency and performance in hyperparameter configuration recommendation. Specifically, we first extract the metafeatures and the classification performance of a set of configurations from a collection of historical datasets; a metaregression task is then established through SGLasso to capture the main characteristics of the underlying relationship between metafeatures and historical performance. For a new dataset, the classification performance of the configurations can be estimated through the selected metafeatures, so that the configuration with the highest predicted performance on the new dataset can be recommended. Furthermore, a general MtL architecture combined with our model is developed. Extensive experiments conducted on 136 UCI datasets demonstrate the effectiveness of the proposed approach. The empirical results on the well-known SVM show that our model can effectively recommend suitable configurations and outperforms existing MtL-based methods as well as popular search-based algorithms such as random search, Bayesian optimization, and Hyperband.
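The pipeline described in the abstract — fit a multivariate sparse-group Lasso metaregression from metafeatures to per-configuration performance, then recommend the configuration with the highest predicted score — can be sketched as below. This is a minimal illustrative implementation via proximal gradient descent under standard SGLasso assumptions, not the authors' code; the function names, regularization weights, and group structure are placeholders for exposition.

```python
import numpy as np

def sglasso_fit(X, Y, groups, lam1=0.1, lam2=0.1, lr=None, n_iter=500):
    """Multivariate sparse-group Lasso via proximal gradient descent.

    X: (n, d) metafeature matrix of historical datasets.
    Y: (n, m) observed performance of m candidate configurations.
    groups: list of index arrays partitioning the d metafeatures.
    Returns W: (d, m) coefficient matrix; zero rows mark discarded metafeatures.
    """
    n, d = X.shape
    m = Y.shape[1]
    if lr is None:
        # step size 1/L, where L is the Lipschitz constant of the gradient
        L = np.linalg.eigvalsh(X.T @ X / n).max()
        lr = 1.0 / L
    W = np.zeros((d, m))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y) / n          # gradient of the squared loss
        W = W - lr * grad
        # elementwise soft-threshold (the Lasso part of the penalty)
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam1, 0.0)
        # groupwise shrinkage (the group-Lasso part), applied per metafeature group
        for g in groups:
            norm = np.linalg.norm(W[g])
            thresh = lr * lam2 * np.sqrt(len(g))  # standard sqrt(group-size) weighting
            if norm <= thresh:
                W[g] = 0.0                    # whole group of metafeatures removed
            else:
                W[g] *= 1.0 - thresh / norm
    return W

def recommend(W, x_new, configs):
    """Recommend the configuration with the highest predicted performance
    for a new dataset described by its metafeature vector x_new."""
    scores = x_new @ W
    return configs[int(np.argmax(scores))], scores
```

Because the prox step thresholds entries (and whole groups) to exactly zero, the fitted model simultaneously selects metafeatures and predicts configuration performance, which is the mechanism the abstract attributes to SGLasso.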
Pages: 12540-12552
Page count: 13