Large Language Models With Holistically Thought Could Be Better Doctors

Times Cited: 0
Authors
Weng, Yixuan [1 ]
Li, Bin [2 ]
Xia, Fei [1 ,3 ]
Zhu, Minjun [1 ,3 ]
Sun, Bin [4 ]
He, Shizhu [1 ,3 ]
Liu, Shengping [5 ]
Li, Kang [1 ,3 ,6 ]
Li, Shutao [4 ]
Zhao, Jun [1 ,3 ]
Affiliations
[1] Chinese Acad Sci, IA, Lab Cognit & Decis Intelligence Complex Syst, Beijing, Peoples R China
[2] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[3] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[4] Hunan Univ, Coll Elect & Informat Engn, Changsha, Peoples R China
[5] Unisound, Beijing, Peoples R China
[6] Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China
Source
NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT II, NLPCC 2024 | 2025 / Vol. 15360
Funding
National Natural Science Foundation of China;
Keywords
Large Language Model; Medical Conversational QA; Holistically Thought;
DOI
10.1007/978-981-97-9434-8_25
Chinese Library Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Medical conversational question answering (CQA) systems aim to provide professional medical services and improve the efficiency of medical care. Despite the success of large language models (LLMs) on complex reasoning tasks in fields such as mathematics, logic, and commonsense QA, their performance still falls short as the complexity and specialization of the medical domain increase. This is because medical CQA tasks require not only strong medical reasoning but also the ability to think both broadly and deeply. In this paper, to address the many-faceted considerations these tasks demand, we propose the Holistically Thought (HoT) method, which guides LLMs to perform diffused and focused thinking in order to generate high-quality medical responses. The proposed HoT method is evaluated on three medical CQA datasets covering English and Chinese. Extensive experimental results show that our method produces more correct, professional, and considerate answers than several state-of-the-art (SOTA) methods, demonstrating its effectiveness.
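The abstract's two-mode prompting idea can be sketched in a few lines: prompt the model once for broad "diffused" thinking, once for deep "focused" thinking, then merge both intermediate responses into a final answer. All prompt wording, the function name `holistically_thought`, and the merge step below are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a diffused-then-focused prompting flow (assumed, not the
# paper's exact prompts). `llm` is any callable mapping a prompt string to a
# completion string, so the flow is backend-agnostic.

DIFFUSED_PROMPT = (
    "Consider the patient's question from many angles (symptoms, history,\n"
    "lifestyle, possible conditions):\n{question}"
)
FOCUSED_PROMPT = (
    "Reason step by step about the most likely medical explanation for:\n"
    "{question}"
)
MERGE_PROMPT = (
    "Question: {question}\n"
    "Broad considerations: {diffused}\n"
    "Focused reasoning: {focused}\n"
    "Write a correct, professional, and considerate answer."
)

def holistically_thought(llm, question):
    """Run diffused then focused thinking, then merge both into one answer."""
    diffused = llm(DIFFUSED_PROMPT.format(question=question))
    focused = llm(FOCUSED_PROMPT.format(question=question))
    return llm(MERGE_PROMPT.format(
        question=question, diffused=diffused, focused=focused))
```

In practice the three calls could share one chat session or use different temperatures for the diffused versus focused passes; the sketch keeps them independent for clarity.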
Pages: 319-332
Page Count: 14