Enhancement of the Performance of Large Language Models in Diabetes Education through Retrieval-Augmented Generation: Comparative Study

Cited: 1
Authors
Wang, Dingqiao [1 ]
Liang, Jiangbo [1 ]
Ye, Jinguo [1 ]
Li, Jingni [1 ]
Li, Jingpeng [1 ]
Zhang, Qikai [1 ]
Hu, Qiuling [1 ]
Pan, Caineng [1 ]
Wang, Dongliang [1 ]
Liu, Zhong [1 ]
Shi, Wen [1 ]
Shi, Danli [2 ]
Li, Fei [1 ]
Qu, Bo [3 ]
Zheng, Yingfeng [1 ]
Affiliations
[1] Sun Yat-sen Univ, Zhongshan Ophthalm Ctr, Guangdong Prov Clin Res Ctr Ocular Dis, State Key Lab Ophthalmol, Guangdong Prov Key Lab Op, 07 Jinsui Rd, Guangzhou 510060, Peoples R China
[2] Hong Kong Polytech Univ, Res Ctr SHARP Vis, Hong Kong, Peoples R China
[3] Peking Univ Third Hosp, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
large language models; LLMs; retrieval-augmented generation; RAG; GPT-4.0; Claude-2; Google Bard; diabetes education;
DOI
10.2196/58041
Chinese Library Classification
R19 [Health care organization and services (health service management)];
Discipline Classification
Abstract
Background: Large language models (LLMs) have demonstrated advanced performance in processing clinical information. However, commercially available LLMs lack specialized medical knowledge and remain susceptible to generating inaccurate information. Given the need for self-management in diabetes, patients commonly seek information online. We introduce the Retrieval-augmented Information System for Enhancement (RISE) framework and evaluate its performance in enhancing LLMs to provide accurate responses to diabetes-related inquiries.
Objective: This study aimed to evaluate the potential of the RISE framework, an information retrieval and augmentation tool, to improve LLMs' ability to respond accurately and safely to diabetes-related inquiries.
Methods: RISE, an innovative retrieval augmentation framework, comprises 4 steps: query rewriting, information retrieval, summarization, and execution. Using a set of 43 common diabetes-related questions, we evaluated 3 base LLMs (GPT-4, Anthropic Claude 2, Google Bard) and their RISE-enhanced versions. Responses were assessed by clinicians for accuracy and comprehensiveness and by patients for understandability. Data collection was conducted from September 30, 2023 to February 5, 2024.
Results: The integration of RISE significantly improved the accuracy and comprehensiveness of responses from all 3 base LLMs. On average, the percentage of accurate responses increased by 12% (15/129) with RISE. Specifically, the rate of accurate responses increased by 7% (3/43) for GPT-4, 19% (8/43) for Claude 2, and 9% (4/43) for Google Bard. The framework also enhanced response comprehensiveness, with mean scores improving by 0.44 (SD 0.10). Understandability improved by 0.19 (SD 0.13) on average.
Conclusions: RISE significantly improves LLMs' performance in responding to diabetes-related inquiries, enhancing accuracy, comprehensiveness, and understandability. These improvements have crucial implications for RISE's future role in patient education and chronic illness self-management, which could help relieve pressure on medical resources and raise public awareness of medical knowledge.
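The 4-step pipeline described in the Methods (query rewriting, information retrieval, summarization, execution) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the toy keyword-overlap retriever, and the in-memory knowledge base are all assumptions, and in the actual study the final augmented prompt would be sent to a base LLM (GPT-4, Claude 2, or Google Bard) rather than returned as a string.

```python
from typing import List

# Toy in-memory corpus standing in for a curated diabetes knowledge base.
KNOWLEDGE_BASE = [
    "Metformin is a common first-line oral medication for type 2 diabetes.",
    "Regular blood glucose monitoring helps patients adjust diet and insulin.",
    "The HbA1c test reflects average blood glucose over about three months.",
]

def rewrite_query(query: str) -> str:
    """Step 1: normalize the patient's question into a retrieval-friendly form."""
    return query.strip().rstrip("?").lower()

def retrieve(query: str, k: int = 2) -> List[str]:
    """Step 2: rank passages by simple keyword overlap with the rewritten query."""
    terms = set(query.split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda p: len(terms & set(p.lower().replace(".", "").split())),
        reverse=True,
    )
    return scored[:k]

def summarize(passages: List[str]) -> str:
    """Step 3: condense the retrieved passages into a compact context block."""
    return " ".join(passages)

def execute(question: str, context: str) -> str:
    """Step 4: build the augmented prompt that would be sent to the base LLM."""
    return (
        "Answer the diabetes-related question using only this context.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

def rise_pipeline(question: str) -> str:
    """Run all four RISE steps and return the augmented prompt."""
    rewritten = rewrite_query(question)
    context = summarize(retrieve(rewritten))
    return execute(question, context)
```

For example, `rise_pipeline("What does the HbA1c test measure?")` produces a prompt whose context block includes the HbA1c passage, grounding the LLM's answer in retrieved material rather than its parametric knowledge alone.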
Pages: 12