Deep neural networks with L1 and L2 regularization for high dimensional corporate credit risk prediction

Cited: 29
Authors
Yang, Mei [1 ]
Lim, Ming K. [4 ]
Qu, Yingchi [1 ]
Li, Xingzhi [3 ]
Ni, Du [2 ]
Affiliations
[1] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400030, Peoples R China
[2] Nanjing Univ Posts & Telecommun, Sch Management, Jiangsu 210003, Peoples R China
[3] Chongqing Jiaotong Univ, Sch Econ & Management, Chongqing 400074, Peoples R China
[4] Univ Glasgow, Adam Smith Business Sch, Glasgow G14 8QQ, Scotland
Keywords
High dimensional data; Credit risk; Deep neural network; Prediction; L1 regularization; Support vector machines; Feature selection; Decision making; Models; Classification; SVM
DOI: 10.1016/j.eswa.2022.118873
Chinese Library Classification
TP18 (Theory of artificial intelligence)
Discipline codes
081104; 0812; 0835; 1405
Abstract
Accurate credit risk prediction can help companies avoid bankruptcy and make adjustments ahead of time. There is a tendency in corporate credit risk prediction for more and more features to be considered in the prediction system. However, this often introduces redundant and irrelevant information, which greatly impairs the performance of prediction algorithms. Therefore, this study proposes HDNN, an improved deep neural network (DNN) algorithm suited to high dimensional prediction of corporate credit risk. We first proved theoretically that there is no regularization effect when L1 regularization is applied to the batch normalization layer of a DNN, which has been a tacit rule in industrial practice but had never been proved. In addition, we proved that adding an L2 constraint on top of the L1 regularization solves this issue. Finally, this study analyzed a case study of credit data with supply chain and network data to show the superiority of the HDNN algorithm in the scenario of a high dimensional dataset.
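The abstract's central theoretical claim can be illustrated numerically: batch normalization makes a layer's output invariant to rescaling its incoming weights, so an L1 penalty alone can be driven toward zero without changing the function the network computes. The following is a minimal NumPy sketch of this effect (a simplified batch normalization without learnable scale and shift parameters, not the authors' HDNN implementation):

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize each feature over the batch to zero mean, unit variance.
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))   # a batch of 64 samples, 10 features
W = rng.normal(size=(10, 5))    # dense-layer weights feeding into BN

out = batch_norm(X @ W)
out_scaled = batch_norm(X @ (0.1 * W))  # shrink the weights tenfold

# The BN output is (up to eps) unchanged, while the L1 penalty on the
# weights shrinks by the same factor of 10 -- so minimizing L1 alone
# does not constrain the function the layer computes.
print(np.allclose(out, out_scaled, atol=1e-3))
print(np.abs(W).sum(), np.abs(0.1 * W).sum())
```

An L2 term breaks this invariance: rescaling the weights changes the squared norm quadratically rather than linearly, so the combined penalty can no longer be made arbitrarily small without affecting the effective learning dynamics, which is consistent with the paper's proposed L1 plus L2 remedy.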
Pages: 9
Related papers (50 in total)
  • [21] Prediction using step-wise L1, L2 regularization and feature selection for small data sets with large number of features
    Demir-Kavuk, Ozgur
    Kamada, Mayumi
    Akutsu, Tatsuya
    Knapp, Ernst-Walter
    BMC BIOINFORMATICS, 2011, 12
  • [22] Investigation on the Effect of L1 and L2 Regularization on Image Features Extracted using Restricted Boltzmann Machine
    Jaiswal, Shruti
    Mehta, Ashish
    Nandi, G. C.
    PROCEEDINGS OF THE 2018 SECOND INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND CONTROL SYSTEMS (ICICCS), 2018, : 1548 - 1553
  • [23] An antinoise sparse representation method for robust face recognition via joint l1 and l2 regularization
    Zeng, Shaoning
    Gou, Jianping
    Deng, Lunman
    EXPERT SYSTEMS WITH APPLICATIONS, 2017, 82 : 1 - 9
  • [24] Convergence of batch gradient algorithm with smoothing composition of group l0 and l1/2 regularization for feedforward neural networks
    Ramchoun, Hassan
    Ettaouil, Mohamed
    PROGRESS IN ARTIFICIAL INTELLIGENCE, 2022, 11 (03) : 269 - 278
  • [25] Reading comprehension in L1 and L2: An integrative approach
    Li, Ping
    Clariana, Roy B.
    JOURNAL OF NEUROLINGUISTICS, 2019, 50 : 94 - 105
  • [26] Congestion Control of Wireless Sensor Networks based on L1/2 Regularization
    Jin, Xin
    Yang, Yang
    Ma, Jinrong
    Li, Zhenxing
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 2436 - 2441
  • [27] L1, Lp, L2, and elastic net penalties for regularization of Gaussian component distributions in magnetic resonance relaxometry
    Sabett, Christiana
    Hafftka, Ariel
    Sexton, Kyle
    Spencer, Richard G.
    CONCEPTS IN MAGNETIC RESONANCE PART A, 2017, 46A (02)
  • [28] Oriented total variation l1/2 regularization
    Jiang, Wenfei
    Cui, Hengbin
    Zhang, Fan
    Rong, Yaocheng
    Chen, Zhibo
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2015, 29 : 125 - 137
  • [29] Training Compact DNNs with l1/2 Regularization
    Tang, Anda
    Niu, Lingfeng
    Miao, Jianyu
    Zhang, Peng
    PATTERN RECOGNITION, 2023, 136
  • [30] Sparse kernel logistic regression based on L1/2 regularization
    Xu Chen
    Peng ZhiMing
    Jing WenFeng
    SCIENCE CHINA-INFORMATION SCIENCES, 2013, 56 (04) : 1 - 16