A novel method for financial distress prediction based on sparse neural networks with L1/2 regularization

Times cited: 0
|
Authors
Chen, Ying [1 ]
Guo, Jifeng [2 ]
Huang, Junqin [3 ]
Lin, Bin [3 ,4 ]
Affiliations
[1] South China Normal Univ, Int Business Coll, Guangzhou 510631, Peoples R China
[2] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[3] Sun Yat Sen Univ, Sch Business, Guangzhou 510275, Peoples R China
[4] Guangdong Ind Polytech, Guangzhou 510300, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Financial distress prediction; Feature selection; Sparse neural networks; L-1/2 regularization; BANKRUPTCY PREDICTION; DISCRIMINANT-ANALYSIS; FIRMS; SELECTION; RATIOS; REGRESSION; ABILITY; RISK;
DOI
10.1007/s13042-022-01566-y
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Corporate financial distress affects the interests of both the enterprise and its stakeholders, so its accurate prediction is of great significance for avoiding large losses. Despite significant effort and progress in this field, existing prediction methods are either limited in the number of input variables they can handle or restricted to financial predictors. To alleviate these issues, both financial and non-financial variables are screened from existing accounting and finance theory for use as financial distress predictors. In addition, a novel method for financial distress prediction (FDP) based on sparse neural networks, named FDP-SNN, is proposed, in which the hidden-layer weights are constrained with L-1/2 regularization to achieve sparsity, so that relevant and important predictors are selected and prediction accuracy improves. The sparsity also supports the interpretability of the model. The results show that non-financial variables, such as investor protection and governance structure, play a more important role in financial distress prediction than financial ones, especially as the forecast period grows longer. Compared with classic models proposed by prominent researchers in accounting and finance, the proposed model performs better in terms of accuracy, precision, and AUC.
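The abstract describes penalizing weights with an L-1/2 term so that irrelevant predictors are driven toward zero. The sketch below is not the authors' FDP-SNN implementation; it is a minimal illustration of the same penalty on a plain linear model, with a smoothed subgradient (the `eps` constant is an assumption to avoid the singular gradient at zero), showing the feature-selection effect on synthetic data.

```python
import numpy as np

def l_half_penalty(w, lam):
    """L1/2 regularization term: lam * sum_j |w_j|^(1/2)."""
    return lam * np.sum(np.sqrt(np.abs(w)))

def train_sparse_linear(X, y, lam=0.1, lr=0.05, epochs=500, eps=1e-2):
    """Gradient descent on mean squared error plus a smoothed L1/2 penalty.

    eps keeps the penalty gradient finite near w = 0; hyperparameters are
    illustrative choices, not values from the paper.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)                    # data-fit term
        grad += 0.5 * lam * np.sign(w) / np.sqrt(np.abs(w) + eps)
        w -= lr * grad
    return w

# Synthetic data: only the first 2 of 6 predictors are relevant.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1]
w = train_sparse_linear(X, y)
print(np.round(w, 3))  # coefficients of irrelevant predictors shrink toward zero
```

In a neural network, as in the paper, the same penalty would be added to the hidden-layer weight matrix instead of a linear coefficient vector; the shrinkage mechanism is identical.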
Pages: 2089-2103 (15 pages)