M3T-LM: A multi-modal multi-task learning model for jointly predicting patient length of stay and mortality

Citations: 0
Authors
Chen, Junde [1 ]
Li, Qing [2 ]
Liu, Feng [3 ]
Wen, Yuxin [1 ]
Affiliations
[1] Dale E. and Sarah Ann Fowler School of Engineering, Chapman University, Orange, CA 92866
[2] Department of Industrial and Manufacturing Systems Engineering, Iowa State University, Ames, IA 50011
[3] School of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07030
Funding
National Science Foundation (USA)
Keywords
Data-fusion model; Deep learning; Length of stay prediction; Multi-task learning;
DOI
10.1016/j.compbiomed.2024.109237
Abstract
Accurate prediction of inpatient length of stay (LoS) and mortality is essential for enhancing hospital service efficiency, particularly in light of the constraints posed by limited healthcare resources. Integrative analysis of heterogeneous clinical record data from different sources holds great promise for improving the prognosis and diagnosis of LoS and mortality. Most existing studies, however, focus on a single data modality or rely on single-task learning, i.e., training the LoS and mortality tasks separately. This limits the utilization of available multi-modal data and prevents the sharing of feature representations that could capture correlations between the two tasks, ultimately hindering model performance. To address this challenge, this study proposes a novel Multi-Modal Multi-Task learning model, termed M3T-LM, that integrates clinical records to predict inpatients' LoS and mortality simultaneously. The M3T-LM framework incorporates multiple data modalities by constructing sub-models tailored to each modality. Specifically, a novel attention-embedded one-dimensional (1D) convolutional neural network (CNN) is designed to handle numerical data. Clinical notes are converted into sequence data, and two long short-term memory (LSTM) networks are then employed to model the textual sequences. A two-dimensional (2D) CNN architecture, denoted CRXMDL, is designed to extract high-level features from chest X-ray (CXR) images. These sub-models are then integrated to form the M3T-LM, which captures the correlations between the LoS and mortality prediction tasks. The effectiveness of the proposed method is validated on the MIMIC-IV dataset, where it attained a test MAE of 5.54 for LoS prediction and a test F1 of 0.876 for mortality prediction. The experimental results demonstrate that our approach outperforms state-of-the-art (SOTA) methods in tackling mixed regression and classification tasks.
© 2024 Elsevier Ltd
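The abstract describes training the LoS regression task and the mortality classification task jointly. A common way to do this is to optimize a weighted sum of a regression loss and a classification loss over shared representations. The sketch below illustrates that idea only; the actual loss formulation and task weighting used in M3T-LM are not given in this record, so the `alpha` weight and the specific loss choices (MSE and binary cross-entropy) are assumptions for illustration.

```python
import math

def multitask_loss(los_pred, los_true, mort_prob, mort_true, alpha=0.5):
    """Joint loss for a mixed regression/classification setting:
    MSE on length-of-stay predictions plus binary cross-entropy on
    mortality probabilities, balanced by the (assumed) weight alpha."""
    # Regression term: mean squared error on predicted LoS (in days)
    mse = sum((p - t) ** 2 for p, t in zip(los_pred, los_true)) / len(los_true)
    # Classification term: binary cross-entropy on mortality probability
    eps = 1e-12  # guard against log(0)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(mort_prob, mort_true)) / len(mort_true)
    return alpha * mse + (1 - alpha) * bce

# Example with two patients: LoS predictions of 4.0 and 6.0 days
# against true values 5.0 and 5.5, and mortality probabilities
# 0.9 and 0.2 against labels 1 and 0.
loss = multitask_loss([4.0, 6.0], [5.0, 5.5], [0.9, 0.2], [1, 0])
```

Minimizing a single combined objective like this is what lets gradients from both tasks shape the shared feature extractors, which is the mechanism the abstract credits for capturing cross-task correlations.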