Multi-task Envisioning Transformer-based Autoencoder for Corporate Credit Rating Migration Early Prediction

Cited by: 0
Authors
Yue, Han [1 ]
Xia, Steve [2 ]
Liu, Hongfu [1 ]
Affiliations
[1] Brandeis Univ, Waltham, MA 02254 USA
[2] Guardian Life Insurance, New York, NY USA
Keywords
Rating Migration; Fin-tech; Machine Learning; VOLATILITY; FINTECH;
DOI
10.1145/3534678.3539098
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
Corporate credit ratings issued by third-party rating agencies are quantified assessments of a company's creditworthiness. Credit ratings correlate highly with the likelihood of a company defaulting on its debt obligations. These ratings play a critical role in investment decision-making as one of the key risk factors. They are also central to regulatory frameworks such as Basel II in calculating the capital that financial institutions must hold. Being able to predict rating changes would greatly benefit investors and regulators alike. In this paper, we consider the corporate credit rating migration early prediction problem, which predicts whether an issuer's credit rating will be upgraded, unchanged, or downgraded 12 months later, based on its latest financial reporting information at the time. We investigate the effectiveness of several standard machine learning algorithms and find that these models deliver inferior performance. As part of our contribution, we propose a new Multi-task Envisioning Transformer-based Autoencoder (META) model to tackle this challenging problem. META consists of Positional Encoding, a Transformer-based Autoencoder, and Multi-task Prediction, which together learn effective representations for both migration prediction and rating prediction. This enables META to better exploit the historical data in the training stage for one-year-ahead prediction. Experimental results show that META outperforms all baseline models.
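The abstract names Positional Encoding as the first of META's three components but gives no details. As a minimal illustrative sketch (not the paper's own code), the function below implements the standard sinusoidal positional encoding from the original Transformer, a common way to inject the temporal order of a sequence of quarterly financial reports into the model's input embeddings; the function name and the assumption that META uses this particular variant are the editor's.

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding (even dims: sine, odd dims: cosine).

    Returns a seq_len x d_model table of floats; each row would be added to the
    feature embedding of the corresponding time step before the Transformer encoder.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            # Wavelength grows geometrically with dimension index i.
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Because each position maps to a unique pattern of phases, the encoder can distinguish, say, the most recent quarter from one a year earlier even though self-attention itself is order-agnostic.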
Pages: 4452 - 4460
Page count: 9