Neural lasso: a unifying approach of lasso and neural networks

Cited by: 0
Authors
Curbelo, Ernesto [1 ]
Delgado-Gomez, David [1 ]
Carreras, Danae [1 ]
Affiliations
[1] Univ Carlos III Madrid, Dept Stat, Ave Univ 30, Madrid 28911, Spain
Keywords
Neural networks; Lasso; Cross-validation; Feature selection; FRAMEWORK; SUICIDE; SCALE; TREES
DOI
10.1007/s41060-024-00546-5
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In recent years, there has been growing interest in building bridges between statistics and neural networks. This article focuses on adapting the widely used lasso algorithm to the context of neural networks. To accomplish this, the network configuration is first designed. Then, three optimization algorithms are considered for estimating the network weights. The first, called standard neural lasso, employs the conventional procedure for training neural networks. The second, termed restricted neural lasso, mimics the traditional lasso to establish a connection between statistics and machine learning. Finally, a third optimization algorithm, called voting neural lasso, was developed. Voting neural lasso offers a novel way of estimating weights by considering the significance of variables across the cross-validation scenarios. Results showed that the conventional approach to training neural networks yielded lower performance when the validation set was not sufficiently representative. It was also observed that the restricted neural lasso and the traditional lasso obtained equivalent results, which shows the convergence of the neural technique with the statistical one. Finally, the developed voting neural lasso algorithm outperformed the traditional lasso. These results were obtained across diverse training sets, with the number of observations ranging from as few as 47 to as many as 4000 and the number of predictors varying from 9 to 200.
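The record contains no code, but the abstract's two central ideas, a lasso expressed as a one-layer linear network trained with an L1 penalty and variable selection by voting across cross-validation folds, can be sketched briefly. The Python/NumPy sketch below is a minimal illustration under stated assumptions; the function names and hyperparameters (neural_lasso_fit, voting_selection, lam, lr, epochs, tol) are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def neural_lasso_fit(X, y, lam=0.1, lr=0.01, epochs=2000):
    # One linear layer with no activation, trained by subgradient
    # descent on MSE + lam * ||w||_1. Hyperparameters are illustrative,
    # not the paper's settings.
    n, p = X.shape
    w, b = np.zeros(p), 0.0
    for _ in range(epochs):
        resid = X @ w + b - y
        w -= lr * (X.T @ resid / n + lam * np.sign(w))  # L1 subgradient
        b -= lr * resid.mean()
    return w, b

def voting_selection(X, y, lam=0.1, k=5, tol=1e-2):
    # Sketch of the voting idea: refit on each of k cross-validation
    # splits and keep predictors whose weight is non-negligible in a
    # strict majority of folds. The tolerance and majority rule are
    # assumptions for illustration.
    n, p = X.shape
    folds = np.array_split(np.random.default_rng(0).permutation(n), k)
    votes = np.zeros(p)
    for fold in folds:
        train = np.setdiff1d(np.arange(n), fold)
        w, _ = neural_lasso_fit(X[train], y[train], lam=lam)
        votes += np.abs(w) > tol
    return votes > k / 2  # boolean mask of selected predictors

if __name__ == "__main__":
    # Toy check: 3 informative predictors out of 20.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 20))
    y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.normal(size=200)
    print("selected predictors:", np.flatnonzero(voting_selection(X, y)))
```

A proximal (soft-thresholding) update would yield exact zeros and match the classical lasso more closely; plain subgradient descent is used here only because it mirrors conventional neural-network training, which the abstract identifies as the standard neural lasso's procedure.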
Pages: 11