Multi-modal learning for inpatient length of stay prediction

Cited: 7
Authors
Chen, Junde [1 ]
Wen, Yuxin [1 ]
Pokojovy, Michael [2 ]
Tseng, Tzu-Liang [3 ]
McCaffrey, Peter [4 ]
Vo, Alexander [4 ]
Walser, Eric [4 ]
Moen, Scott [4 ]
Affiliations
[1] Chapman Univ, Dale E & Sarah Ann Fowler Sch Engn, Orange, CA 92866 USA
[2] Old Dominion Univ, Dept Math & Stat, Norfolk, VA 23529 USA
[3] Univ Texas El Paso, Dept Ind Mfg & Syst Engn, El Paso, TX 79968 USA
[4] Univ Texas Med Branch, Galveston, TX 77550 USA
Funding
U.S. National Science Foundation;
Keywords
Chest X-ray images; Data-fusion model; Length of stay prediction; Multi-modal learning; HOSPITAL MORTALITY;
DOI
10.1016/j.compbiomed.2024.108121
Chinese Library Classification
Q [Biological Sciences];
Discipline codes
07; 0710; 09;
Abstract
Predicting inpatient length of stay (LoS) is important for hospitals aiming to improve service efficiency and enhance management capabilities. Patient medical records are strongly associated with LoS, but their diverse modalities, heterogeneity, and complexity make it challenging to leverage such heterogeneous data in a predictive model that can accurately estimate LoS. To address this challenge, this study establishes a novel data-fusion model, termed DF-Mdl, that integrates heterogeneous clinical data to predict inpatient LoS between hospital admission and discharge. Multi-modal data such as demographic data, clinical notes, laboratory test results, and medical images are utilized in the proposed methodology, with an individual "basic" sub-model applied separately to each data modality. Specifically, a convolutional neural network (CNN) model, termed CRXMDL, is designed for chest X-ray (CXR) image data; two long short-term memory (LSTM) networks extract features from long text data; and a novel attention-embedded 1D convolutional neural network extracts useful information from numerical data. Finally, these basic models are integrated into a new data-fusion model (DF-Mdl) for inpatient LoS prediction. The proposed method attains the best R2 and EVAR values of 0.6039 and 0.6042, respectively, among competitors for LoS prediction on the Medical Information Mart for Intensive Care (MIMIC)-IV test dataset. Empirical evidence suggests better performance than other state-of-the-art (SOTA) methods, demonstrating the effectiveness and feasibility of the proposed approach.
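The late-fusion design described in the abstract (one "basic" sub-model per modality, whose features are concatenated and fed to a regression head) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: toy random projections stand in for the CRXMDL, LSTM, and attention-embedded 1D-CNN sub-models, and all dimensions, data, and targets are synthetic and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def project(x, dim_out, seed):
    """Toy stand-in for a learned per-modality sub-model:
    a fixed random projection followed by a tanh nonlinearity."""
    w = np.random.default_rng(seed).standard_normal((x.shape[1], dim_out))
    return np.tanh(x @ w)

# Synthetic multi-modal batch of 100 patients (all values hypothetical).
n = 100
x_image = rng.standard_normal((n, 64))  # e.g. a CXR image embedding
x_text = rng.standard_normal((n, 32))   # e.g. a clinical-note embedding
x_num = rng.standard_normal((n, 16))    # e.g. laboratory test results

# Per-modality "basic" sub-models map each modality to an 8-dim feature space.
f_img = project(x_image, 8, seed=1)
f_txt = project(x_text, 8, seed=2)
f_num = project(x_num, 8, seed=3)

# Late fusion: concatenate the modality features, then a linear
# least-squares head predicts length of stay (in days).
fused = np.hstack([f_img, f_txt, f_num])      # shape (100, 24)
y = rng.gamma(shape=2.0, scale=3.0, size=n)   # synthetic LoS targets
beta, *_ = np.linalg.lstsq(fused, y, rcond=None)
y_hat = fused @ beta                          # fused-model LoS predictions
```

In the paper the fusion head and sub-models are trained jointly end to end; the sketch only shows how the modality-specific feature vectors are combined before prediction.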
Pages: 11
Related papers
50 records
  • [41] Popularity Prediction of Social Media based on Multi-Modal Feature Mining
    Hsu, Chih-Chung
    Kang, Li-Wei
    Lee, Chia-Yen
    Lee, Jun-Yi
    Zhang, Zhong-Xuan
    Wu, Shao-Min
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2687 - 2691
  • [42] Leveraging Foundation Models for Multi-modal Federated Learning with Incomplete Modality
    Che, Liwei
    Wang, Jiaqi
    Liu, Xinyue
    Ma, Fenglong
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES-APPLIED DATA SCIENCE TRACK, PT IX, ECML PKDD 2024, 2024, 14949 : 401 - 417
  • [43] A scalable multi-modal learning fruit detection algorithm for dynamic environments
    Mao, Liang
    Guo, Zihao
    Liu, Mingzhe
    Li, Yue
    Wang, Linlin
    Li, Jie
    FRONTIERS IN NEUROROBOTICS, 2025, 18
  • [44] ConOffense: Multi-modal multitask Contrastive learning for offensive content identification
    Shome, Debaditya
    Kar, T.
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 4524 - 4529
  • [45] Cross-Modal Diversity-Based Active Learning for Multi-Modal Emotion Estimation
    Xu, Yifan
    Meng, Lubin
    Peng, Ruimin
    Yin, Yingjie
    Ding, Jingting
    Li, Liang
    Wu, Dongrui
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [46] Multi-Modal Low-Data-Based Learning for Video Classification
    Citak, Erol
    Karsligil, Mine Elif
    APPLIED SCIENCES-BASEL, 2024, 14 (10):
  • [47] A Multi-Modal Vertical Federated Learning Framework Based on Homomorphic Encryption
    Gong, Maoguo
    Zhang, Yuanqiao
    Gao, Yuan
    Qin, A. K.
    Wu, Yue
    Wang, Shanfeng
    Zhang, Yihong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 1826 - 1839
  • [48] Image caption of space science experiment based on multi-modal learning
    Li P.-Z.
    Wan X.
    Li S.-Y.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2021, 29 (12): : 2944 - 2955
  • [49] Korean Tourist Spot Multi-Modal Dataset for Deep Learning Applications
    Jeong, Changhoon
    Jang, Sung-Eun
    Na, Sanghyuck
    Kim, Juntae
    DATA, 2019, 4 (04)
  • [50] Reliable multi-modal prototypical contrastive learning for difficult airway assessment
    Li, Xiaofan
    Peng, Bo
    Yao, Yuan
    Zhang, Guangchao
    Xie, Zhuyang
    Saleem, Muhammad Usman
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 273