Deep neural networks with L1 and L2 regularization for high dimensional corporate credit risk prediction

Citations: 29
Authors
Yang, Mei [1 ]
Lim, Ming K. [4 ]
Qu, Yingchi [1 ]
Li, Xingzhi [3 ]
Ni, Du [2 ]
Affiliations
[1] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400030, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Management, Jiangsu 210003, Peoples R China
[3] Chongqing Jiaotong Univ, Sch Econ & Management, Chongqing 400074, Peoples R China
[4] Univ Glasgow, Adam Smith Business Sch, Glasgow G14 8QQ, Scotland
Keywords
High dimensional data; Credit risk; Deep neural network; Prediction; L1 regularization; SUPPORT VECTOR MACHINES; FEATURE-SELECTION; DECISION-MAKING; MODELS; CLASSIFICATION; SVM
DOI
10.1016/j.eswa.2022.118873
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Accurate credit risk prediction can help companies avoid bankruptcy and make adjustments ahead of time. In corporate credit risk prediction, there is a tendency to consider more and more features in the prediction system. However, this often introduces redundant and irrelevant information that greatly impairs the performance of prediction algorithms. Therefore, this study proposes HDNN, an improved deep neural network (DNN) algorithm suited to high dimensional prediction of corporate credit risk. We first prove theoretically that L1 regularization has no regularization effect when it is added to the batch normalization layer of a DNN, a rule of thumb in industrial practice that had never been proved. In addition, we prove that adding an L2 constraint alongside the L1 regularization solves this issue. Finally, this study analyzes a case study of credit data combined with supply chain and network data to show the superiority of the HDNN algorithm on a high dimensional dataset.
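The record contains no code, so the following is only a minimal Keras sketch of the idea summarized in the abstract: combining L1 and L2 penalties in a batch-normalized DNN for high dimensional binary credit-risk prediction. The function name build_hdnn, the layer sizes, and the penalty strengths are illustrative assumptions, not the authors' HDNN implementation.

```python
# Minimal sketch (assumed architecture, not the paper's released code):
# a feed-forward classifier with combined L1 + L2 weight penalties and
# batch normalization between dense layers.
import tensorflow as tf
from tensorflow.keras import layers, regularizers


def build_hdnn(input_dim: int, l1: float = 1e-4, l2: float = 1e-4) -> tf.keras.Model:
    """Binary credit-risk classifier with elastic-net-style (L1 + L2) penalties."""
    reg = regularizers.l1_l2(l1=l1, l2=l2)  # combined L1/L2 constraint
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_dim,)),
        layers.Dense(128, activation="relu", kernel_regularizer=reg),
        # Per the abstract, BN rescaling can nullify a pure L1 penalty;
        # the added L2 term is intended to keep the regularization effective.
        layers.BatchNormalization(),
        layers.Dense(64, activation="relu", kernel_regularizer=reg),
        layers.BatchNormalization(),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
    return model


# Hypothetical usage on a high dimensional credit dataset (X_train, y_train):
# model = build_hdnn(input_dim=X_train.shape[1])
# model.fit(X_train, y_train, epochs=50, batch_size=256, validation_split=0.2)
```

The l1 and l2 strengths would in practice be tuned (e.g., by cross-validation), since the paper's chosen values are not stated in this record.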
Pages: 9
Related Papers (10 of 50 shown)
  • [1] Sentiment Analysis of Tweets by Convolution Neural Network with L1 and L2 Regularization
    Rangra, Abhilasha
    Sehgal, Vivek Kumar
    Shukla, Shailendra
    ADVANCED INFORMATICS FOR COMPUTING RESEARCH, ICAICR 2018, PT I, 2019, 955 : 355 - 365
  • [2] A novel method for financial distress prediction based on sparse neural networks with L1/2 regularization
    Chen, Ying
    Guo, Jifeng
    Huang, Junqin
    Lin, Bin
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (07) : 2089 - 2103
  • [3] l1 Regularization in Two-Layer Neural Networks
    Li, Gen
    Gu, Yuantao
    Ding, Jie
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 135 - 139
  • [4] Enhance the Performance of Deep Neural Networks via L2 Regularization on the Input of Activations
    Shi, Guang
    Zhang, Jiangshe
    Li, Huirong
    Wang, Changpeng
    NEURAL PROCESSING LETTERS, 2019, 50 (01) : 57 - 75
  • [5] Structure Optimization of Neural Networks with L1 Regularization on Gates
    Chang, Qin
    Wang, Junze
    Zhang, Huaqing
    Shi, Lina
    Wang, Jian
    Pal, Nikhil R.
    2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2018, : 196 - 203
  • [6] A new Sigma-Pi-Sigma neural network based on L1 and L2 regularization and applications
    Jiao, Jianwei
    Su, Keqin
    AIMS MATHEMATICS, 2024, 9 (03): : 5995 - 6012
  • [7] Towards l1 Regularization for Deep Neural Networks: Model Sparsity Versus Task Difficulty
    Shen, Ta-Chun
    Yang, Chun-Pai
    Yen, Ian En-Hsu
    Lin, Shou-De
    2022 IEEE 9TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2022, : 126 - 134
  • [8] Image Reconstruction in Ultrasonic Transmission Tomography Using L1/L2 Regularization
    Li, Aoyu
    Liang, Guanghui
    Dong, Feng
    2024 IEEE INTERNATIONAL INSTRUMENTATION AND MEASUREMENT TECHNOLOGY CONFERENCE, I2MTC 2024, 2024
  • [9] Prediction and integration of semantics during L2 and L1 listening
    Dijkgraaf, Aster
    Hartsuiker, Robert J.
    Duyck, Wouter
    LANGUAGE COGNITION AND NEUROSCIENCE, 2019, 34 (07) : 881 - 900
  • [10] Structured Pruning of Convolutional Neural Networks via L1 Regularization
    Yang, Chen
    Yang, Zhenghong
    Khattak, Abdul Mateen
    Yang, Liu
    Zhang, Wenxin
    Gao, Wanlin
    Wang, Minjuan
    IEEE ACCESS, 2019, 7 : 106385 - 106394