Multi-view representation learning with Kolmogorov-Smirnov to predict default based on imbalanced and complex dataset

Cited: 15
Authors
Tan, Yandan [1 ]
Zhao, Guangcai [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, 2005 Songhu Rd, Shanghai 200433, Peoples R China
[2] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Peoples R China
Keywords
Multi-view representation learning; Kolmogorov-Smirnov (KS); Imbalanced and complex dataset; Default prediction; P2P lending; CREDIT RISK-ASSESSMENT; MODEL; PEER; NETWORKS; TREE;
DOI
10.1016/j.ins.2022.03.022
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812 ;
Abstract
Existing solutions focus on improving overall accuracy on imbalanced and complex loan datasets, which results in poor recall for default samples. To address these challenges, we propose a multi-view representation learning method with the Kolmogorov-Smirnov (KS) statistic that effectively organizes complex peer-to-peer loan application data and predicts default. First, features are automatically partitioned into multiple views according to their discreteness and correlation differences. A corresponding multi-view deep neural network (MV-DNN) is then developed to acquire knowledge in a multi-view manner: dedicated view learning layers extract knowledge within each view, and an information fusion layer then combines the acquired information so that knowledge can interact across views. To cope with the imbalanced class distribution, the KS statistic is used as the evaluation metric when training the MV-DNN, improving its ability to distinguish the two classes of samples. Experimental results show that, compared with MV-DNNs based on random and k-means multi-view strategies and with other advanced models, our method delivers the best overall performance and the most stable multi-view organization. We further verify that the KS statistic is the key component enabling the model to handle the imbalanced dataset. (c) 2022 Elsevier Inc. All rights reserved.
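The KS metric central to the abstract measures the maximum gap between the empirical score distributions of the two classes: a higher value means the model's scores separate defaulters from non-defaulters more cleanly, regardless of class imbalance. As a rough illustration only (this is not the authors' implementation; the helper name `ks_statistic` is ours), the statistic can be computed from labels and predicted scores like so:

```python
import numpy as np

def ks_statistic(y_true, y_score):
    """Kolmogorov-Smirnov statistic between the predicted-score
    distributions of the positive (default) and negative classes."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = np.sort(y_score[y_true == 1])  # scores of default samples
    neg = np.sort(y_score[y_true == 0])  # scores of non-default samples
    # Evaluate both empirical CDFs at every observed score threshold.
    thresholds = np.unique(y_score)
    cdf_pos = np.searchsorted(pos, thresholds, side="right") / len(pos)
    cdf_neg = np.searchsorted(neg, thresholds, side="right") / len(neg)
    # KS = largest vertical gap between the two CDFs.
    return np.max(np.abs(cdf_pos - cdf_neg))

# Perfectly separated scores give KS = 1.0; random scores give KS near 0.
print(ks_statistic([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0
```

Because KS depends only on the ranking gap between the two class distributions, not on their relative sizes, it is a natural training criterion when default samples are scarce.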
Pages: 380 - 394 (15 pages)
Related Papers
50 records
  • [1] A Survey of Multi-View Representation Learning
    Li, Yingming
    Yang, Ming
    Zhang, Zhongfei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2019, 31 (10) : 1863 - 1883
  • [2] Multi-view representation learning for multi-view action recognition
    Hao, Tong
    Wu, Dan
    Wang, Qian
    Sun, Jin-Sheng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2017, 48 : 453 - 460
  • [3] Comprehensive Multi-view Representation Learning
    Zheng, Qinghai
    Zhu, Jihua
    Li, Zhongyu
    Tian, Zhiqiang
    Li, Chen
    INFORMATION FUSION, 2023, 89 : 198 - 209
  • [4] Unsupervised representation learning based on the deep multi-view ensemble learning
    Koohzadi, Maryam
    Charkari, Nasrollah Moghadam
    Ghaderi, Foad
    APPLIED INTELLIGENCE, 2020, 50 (02) : 562 - 581
  • [5] Tensorized Multi-view Subspace Representation Learning
    Zhang, Changqing
    Fu, Huazhu
    Wang, Jing
    Li, Wen
    Cao, Xiaochun
    Hu, Qinghua
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2020, 128 (8-9) : 2344 - 2361
  • [6] Collaborative Unsupervised Multi-View Representation Learning
    Zheng, Qinghai
    Zhu, Jihua
    Li, Zhongyu
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (07) : 4202 - 4210
  • [8] Semantically consistent multi-view representation learning
    Zhou, Yiyang
    Zheng, Qinghai
    Bai, Shunshun
    Zhu, Jihua
    KNOWLEDGE-BASED SYSTEMS, 2023, 278
  • [9] Instance-wise multi-view representation learning
    Li, Dan
    Wang, Haibao
    Wang, Yufeng
    Wang, Shengpei
    INFORMATION FUSION, 2023, 91 : 612 - 622
  • [10] sEMG-Based Multi-view Feature-Constrained Representation Learning
    Yan, Shuo
    Dai, Hongjun
    Wang, Ruomei
    Zhang, Long
    Wang, Guan
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2024, 2024, 14884 : 322 - 333