Multi-modal Predictive Models of Diabetes Progression

Cited by: 6
Authors
Ramazi, Ramin [1 ]
Perndorfer, Christine [1 ]
Soriano, Emily [1 ]
Laurenceau, Jean-Philippe [1 ]
Beheshti, Rahmatollah [1 ]
Affiliations
[1] Univ Delaware, Newark, DE 19716 USA
Keywords
Type 2 diabetes; Continuous glucose monitoring; Activity trackers; Wearable medical devices; Recurrent neural networks; Type 1
DOI
10.1145/3307339.3342177
Chinese Library Classification (CLC)
TP39 [Computer applications];
Discipline Classification Codes
081203; 0835;
Abstract
With the increasing availability of wearable devices, continuous monitoring of individuals' physiological and behavioral patterns has become significantly more accessible. Access to these continuous streams of data about individuals' statuses offers an unprecedented opportunity for studying complex diseases and health conditions such as type 2 diabetes (T2D). T2D is a common chronic disease whose roots and progression patterns are not fully understood. Predicting the progression of T2D can inform timely and more effective interventions to prevent or manage the disease. In this study, we used a dataset from 63 patients with T2D that includes data from two different types of wearable devices worn by the patients: continuous glucose monitoring (CGM) devices and activity trackers (ActiGraphs). Using this dataset, we created a model for predicting the levels of four major T2D-related biomarkers after a one-year period. We developed a wide and deep neural network that combines demographic information, lab tests, and wearable-sensor data. The deep part of our method is based on the long short-term memory (LSTM) architecture and processes the time-series data collected by the wearables. In predicting the patterns of the four biomarkers, we obtained root mean square errors of +/- 1.67% for HbA1c, +/- 6.22 mg/dl for HDL cholesterol, +/- 10.46 mg/dl for LDL cholesterol, and +/- 18.38 mg/dl for triglycerides. Compared to existing models for studying T2D, our model offers a more comprehensive tool for combining the large variety of factors that contribute to the disease.
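The abstract does not include code, but the wide-and-deep idea it describes can be sketched in PyTorch: a "wide" path that passes static features (demographics, lab tests) straight to the output layer, and a "deep" path in which an LSTM summarizes the wearable time series before the two are combined to regress the four biomarkers. All feature counts, layer sizes, and the `WideAndDeepLSTM` name below are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class WideAndDeepLSTM(nn.Module):
    """Wide & deep sketch: the wide path carries static features
    (demographics, lab tests); the deep path runs an LSTM over
    multi-channel wearable time series (e.g. CGM and ActiGraph)."""
    def __init__(self, n_static=10, n_channels=2, hidden=32, n_biomarkers=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Regression head over the concatenated wide + deep representations,
        # one output per biomarker (HbA1c, HDL, LDL, triglycerides).
        self.head = nn.Linear(n_static + hidden, n_biomarkers)

    def forward(self, static, series):
        # series: (batch, time, channels); keep only the final hidden state
        _, (h, _) = self.lstm(series)
        combined = torch.cat([static, h[-1]], dim=1)
        return self.head(combined)

model = WideAndDeepLSTM()
static = torch.randn(8, 10)     # batch of static feature vectors
series = torch.randn(8, 96, 2)  # e.g. one day of 15-minute sensor readings
out = model(static, series)     # one prediction per biomarker: (8, 4)
```

Training such a model with a mean-squared-error loss would be consistent with the RMSE figures the paper reports for each biomarker.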
Pages: 253-258 (6 pages)
Related Papers
50 total
  • [21] Multi-Modal 2020: Multi-Modal Argumentation 30 Years Later
    Gilbert, Michael A.
    INFORMAL LOGIC, 2022, 42 (03): : 487 - 506
  • [22] Multi-task Multi-modal Models for Collective Anomaly Detection
    Ide, Tsuyoshi
    Phan, Dzung T.
    Kalagnanam, Jayant
    2017 17TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2017, : 177 - 186
  • [23] Multi-modal Perception
    Kondo, T.
    Denshi Joho Tsushin Gakkai Shi/Journal of the Institute of Electronics, Information and Communications Engineers, 78 (12):
  • [24] Multi-modal mapping
    Yates, Darran
    NATURE REVIEWS NEUROSCIENCE, 2016, 17 (09) : 536 - 536
  • [25] Multi-modal perception
    BT Technol J, 1 (35-46):
  • [26] Multi-modal Fusion
    Liu, Huaping
    Hussain, Amir
    Wang, Shuliang
    INFORMATION SCIENCES, 2018, 432 : 462 - 462
  • [27] Model Predictive Control in Partially Observable Multi-Modal Discrete Environments
    Rosolia, Ugo
    Guastella, Dario C.
    Muscato, Giovanni
    Borrelli, Francesco
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 2161 - 2166
  • [28] Multi-modal Predictive Model for MACE Risk Estimation in Patients with Migraine
    Tariq, Amara
    Dumitrascu, Oana
    Luo, Man
    Dumkrieger, Gina
    Schwedt, Todd J.
    Chong, Catherine
    Banerjee, Imon
    2024 IEEE 12TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS, ICHI 2024, 2024, : 684 - 687
  • [29] Multi-modal perception
    Hollier, MP
    Rimell, AN
    Hands, DS
    Voelcker, RM
    BT TECHNOLOGY JOURNAL, 1999, 17 (01) : 35 - 46
  • [30] Multi-modal prognostic biomarkers for Parkinson's disease progression and severity
    Rolland, A. S.
    Dutheil, M.
    Simonin, O.
    Viard, R.
    Huin, V.
    Kyheng, M.
    Moreau, C.
    Thobois, S.
    Eusebio, A.
    Hainque, E.
    Benatru, I.
    Maltete, D.
    Giordana, C.
    Tir, M.
    Hubsch, C.
    Jarraya, B.
    Durif, F.
    Brefel-Courbon, C.
    Rascol, O.
    Corvol, J. C.
    Garcon, G.
    Devos, D.
    MOVEMENT DISORDERS, 2022, 37 : S594 - S595