Multi-task Envisioning Transformer-based Autoencoder for Corporate Credit Rating Migration Early Prediction

Cited: 0
Authors
Yue, Han [1 ]
Xia, Steve [2 ]
Liu, Hongfu [1 ]
Affiliations
[1] Brandeis Univ, Waltham, MA 02254 USA
[2] Guardian Life Insurance, New York, NY USA
Keywords
Rating Migration; Fin-tech; Machine Learning; VOLATILITY; FINTECH;
DOI
10.1145/3534678.3539098
CLC Classification Code
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Corporate credit ratings issued by third-party rating agencies are quantified assessments of a company's creditworthiness. Credit ratings correlate highly with the likelihood of a company defaulting on its debt obligations. These ratings play a critical role in investment decision-making as one of the key risk factors. They are also central to regulatory frameworks such as Basel II in calculating the capital required of financial institutions. Being able to predict rating changes would greatly benefit investors and regulators alike. In this paper, we consider the corporate credit rating migration early prediction problem: predicting whether the credit rating of an issuer will be upgraded, unchanged, or downgraded 12 months later, based on its latest financial reporting information at the time. We investigate the effectiveness of standard machine learning algorithms and conclude that these models deliver inferior performance. As our contribution, we propose a new Multi-task Envisioning Transformer-based Autoencoder (META) model to tackle this challenging problem. META consists of Positional Encoding, a Transformer-based Autoencoder, and Multi-task Prediction, learning representations that are effective for both migration prediction and rating prediction. This enables META to better exploit the historical data in the training stage for one-year-ahead prediction. Experimental results show that META outperforms all baseline models.
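The abstract names Positional Encoding as one of META's three components but gives no implementation details. Purely as an illustrative sketch, the standard sinusoidal encoding of the original Transformer is one common choice (not necessarily the one META uses): each position in a sequence of quarterly financial reports is mapped to a fixed vector so the encoder can distinguish report order.

```python
import math

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal positional encoding.

    Returns a seq_len x d_model table where even dimensions use sine
    and odd dimensions use cosine, with wavelengths increasing
    geometrically from 2*pi to 10000*2*pi.
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

# e.g. 8 quarterly reports, 16-dimensional model embedding
pe = sinusoidal_positional_encoding(8, 16)
```

In a Transformer-based autoencoder these vectors would typically be added to the per-period feature embeddings before the encoder layers; the model and variable names above are illustrative only.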
Pages: 4452 - 4460
Page count: 9
Related Papers
50 in total
  • [1] HTML: Hierarchical Transformer-based Multi-task Learning for Volatility Prediction
    Yang, Linyi
    Ng, Tin Lok James
    Smyth, Barry
    Dong, Ruihai
    WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 441 - 451
  • [2] Predicting Outcomes for Cancer Patients with Transformer-Based Multi-task Learning
    Gerrard, Leah
    Peng, Xueping
    Clarke, Allison
    Schlegel, Clement
    Jiang, Jing
    AI 2021: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13151 : 381 - 392
  • [3] PARFormer: Transformer-Based Multi-Task Network for Pedestrian Attribute Recognition
    Fan, Xinwen
    Zhang, Yukang
    Lu, Yang
    Wang, Hanzi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (01) : 411 - 423
  • [4] Adaptive transformer-based multi-task learning framework for synchronous prediction of substation flooding and outage risks
    Shi, Yu
    Shi, Ying
    Yao, Degui
    Lu, Ming
    Liang, Yun
    ELECTRIC POWER SYSTEMS RESEARCH, 2025, 242
  • [5] Multi-task Active Learning for Pre-trained Transformer-based Models
    Rotman, Guy
    Reichart, Roi
    TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2022, 10 : 1209 - 1228
  • [6] Transformer-based multi-task learning for classification and segmentation of gastrointestinal tract endoscopic images
    Tang, Suigu
    Yu, Xiaoyuan
    Cheang, Chak Fong
    Liang, Yanyan
    Zhao, Penghui
    Yu, Hon Ho
    Choi, I. Cheong
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 157
  • [7] Transformer-Based Multi-task Learning for Queuing Time Aware Next POI Recommendation
    Halder, Sajal
    Lim, Kwan Hui
    Chan, Jeffrey
    Zhang, Xiuzhen
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2021, PT II, 2021, 12713 : 510 - 523
  • [8] Prompt Guided Transformer for Multi-Task Dense Prediction
    Lu, Yuxiang
    Sirejiding, Shalayiding
    Ding, Yue
    Wang, Chunlin
    Lu, Hongtao
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 6375 - 6385
  • [9] Abusive Content Detection in Arabic Tweets Using Multi-Task Learning and Transformer-Based Models
    Alrashidi, Bedour
    Jamal, Amani
    Alkhathlan, Ali
    APPLIED SCIENCES-BASEL, 2023, 13 (10)
  • [10] Transformer-based transfer learning and multi-task learning for improving the performance of speech emotion recognition
    Park, Sunchan
    Kim, Hyung Soon
    JOURNAL OF THE ACOUSTICAL SOCIETY OF KOREA, 2021, 40 (05): 515 - 522