REMONI: An Autonomous System Integrating Wearables and Multimodal Large Language Models for Enhanced Remote Health Monitoring

Cited by: 2
Authors
Ho, Thanh Cong [1 ]
Kharrat, Farah [1 ]
Abid, Abderrazek [1 ]
Karray, Fakhri [1 ,2 ]
Koubaa, Anis [3 ]
Affiliations
[1] Mohamed Bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
[2] Univ Waterloo, Dept Elect & Comp Engn, Waterloo, ON N2L 3G1, Canada
[3] Prince Sultan Univ, Coll Comp & Informat Sci, Riyadh, Saudi Arabia
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS, MEMEA 2024 | 2024
Keywords
Remote Health Monitoring; Wearable Technology; Multimodal Large Language Models; Healthcare;
DOI
10.1109/MEMEA60663.2024.10596778
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline code
0831 ;
Abstract
With the widespread adoption of wearable devices in our daily lives, the demand and appeal for remote patient monitoring have significantly increased. Most research in this field has concentrated on collecting sensor data, visualizing it, and analyzing it to detect anomalies in specific diseases such as diabetes, heart disease, and depression. However, this domain has a notable gap in the aspect of human-machine interaction. This paper proposes REMONI, an autonomous REmote health MONItoring system that integrates multimodal large language models (MLLMs), the Internet of Things (IoT), and wearable devices. The system automatically and continuously collects vital signs and accelerometer data from a dedicated wearable (such as a smartwatch), along with visual data in the form of patient video clips captured by cameras. This data is processed by an anomaly detection module, which includes a fall detection model and algorithms to identify emergency conditions and alert caregivers. A distinctive feature of our proposed system is the natural language processing component, developed with MLLMs capable of detecting and recognizing a patient's activity and emotion while responding to healthcare workers' inquiries. Additionally, prompt engineering is employed to integrate all patient information seamlessly. As a result, doctors and nurses can access real-time vital signs and the patient's current state and mood by interacting with an intelligent agent through a user-friendly web application. Our experiments demonstrate that our system is implementable and scalable for real-life scenarios, potentially reducing the workload of medical professionals and healthcare costs. A full-fledged prototype illustrating the functionalities of the system has been developed and is being tested to demonstrate the robustness of its various capabilities.
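The abstract describes a pipeline in which vital signs are screened by an anomaly detection module and then combined, via prompt engineering, into context for the MLLM agent. The sketch below illustrates that flow in minimal form; the paper does not publish code, so all names, thresholds, and the prompt layout here are illustrative assumptions, not REMONI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate: int        # beats per minute
    spo2: float            # blood oxygen saturation, percent
    temperature_c: float   # body temperature, degrees Celsius

# Illustrative adult resting ranges; REMONI's real thresholds are not given in the abstract.
NORMAL_RANGES = {
    "heart_rate": (60, 100),
    "spo2": (95.0, 100.0),
    "temperature_c": (36.1, 37.2),
}

def detect_anomalies(vitals: VitalSigns) -> list[str]:
    """Flag any vital sign that falls outside its normal range."""
    alerts = []
    for field, (lo, hi) in NORMAL_RANGES.items():
        value = getattr(vitals, field)
        if not lo <= value <= hi:
            alerts.append(f"{field}={value} outside [{lo}, {hi}]")
    return alerts

def build_prompt(patient_name: str, vitals: VitalSigns,
                 activity: str, emotion: str) -> str:
    """Assemble patient context into a single prompt for an MLLM agent."""
    alerts = detect_anomalies(vitals)
    status = "; ".join(alerts) if alerts else "all vital signs within normal range"
    return (
        f"Patient: {patient_name}\n"
        f"Vitals: HR={vitals.heart_rate} bpm, SpO2={vitals.spo2}%, "
        f"Temp={vitals.temperature_c}C\n"
        f"Detected activity: {activity}; emotion: {emotion}\n"
        f"Status: {status}\n"
        "Answer the caregiver's question using only the context above."
    )
```

In the full system this assembled prompt would accompany the video-derived activity and emotion labels when a doctor or nurse queries the agent through the web application.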
Pages: 6