Learning to match patients to clinical trials using large language models

Cited by: 3
Authors
Rybinski, Maciej [1 ]
Kusa, Wojciech [2 ]
Karimi, Sarvnaz [1 ]
Hanbury, Allan [2 ]
Affiliations
[1] CSIRO Data61, 26 Pembroke Rd, Marsfield, NSW 2122, Australia
[2] TU Wien, Favoritenstr 9-11, A-1040 Vienna, Austria
Funding
EU Horizon 2020
Keywords
Clinical trials; Patient to trials matching; TCRR; TREC CT; Large language models; Information retrieval; Learning-to-rank;
DOI
10.1016/j.jbi.2024.104734
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline classification codes
081203; 0835
Abstract
Objective: This study investigates the use of Large Language Models (LLMs) for matching patients to clinical trials (CTs) within an information retrieval pipeline. Our objective is to enhance patient-trial matching by leveraging the semantic processing capabilities of LLMs, thereby improving the effectiveness of patient recruitment for clinical trials.

Methods: We employed a multi-stage retrieval pipeline integrating several methodologies, including BM25 and Transformer-based rankers, along with LLM-based methods. Our primary datasets were the TREC Clinical Trials 2021-23 track collections. We compared LLM-based approaches, focusing on methods that leverage LLMs for query formulation, filtering, relevance ranking, and re-ranking of CTs.

Results: LLM-based systems, particularly those involving re-ranking with a fine-tuned LLM, outperform traditional methods in terms of nDCG and Precision. Fine-tuning LLMs enhances their ability to find eligible trials, and our LLM-based approach is competitive with state-of-the-art systems in the TREC challenges. These findings show the effectiveness of LLMs in CT matching, highlighting their potential for handling complex semantic analysis and improving patient-trial matching. However, the use of LLMs increases computational cost and reduces efficiency; we provide a detailed analysis of the effectiveness-efficiency trade-offs.

Conclusion: This research demonstrates the promising role of LLMs in enhancing the patient-to-clinical-trial matching process, offering a significant advancement in the automation of patient recruitment. Future work should explore optimising the balance between computational cost and retrieval effectiveness in practical applications.
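To make the retrieve-then-rerank structure described in the abstract concrete, the sketch below shows a generic two-stage pipeline: BM25 first-stage retrieval over trial texts, followed by re-ranking of the top candidates. This is a minimal illustration, not the authors' implementation: the function names (tokenize, rerank_score, match_patient_to_trials), the parameters k and top_n, and the use of the rank_bm25 package are assumptions, and the LLM re-ranker is replaced by a toy token-overlap scorer so the example runs end to end. In the paper's setup the second stage would instead be a fine-tuned LLM scoring patient-trial relevance.

```python
# Minimal two-stage retrieve-then-rerank sketch (illustrative only).
from rank_bm25 import BM25Okapi  # assumed dependency: pip install rank-bm25


def tokenize(text: str) -> list[str]:
    # Naive whitespace tokenisation; real systems would use a proper tokenizer.
    return text.lower().split()


def rerank_score(patient_note: str, trial_text: str) -> float:
    # Hypothetical stand-in for an LLM relevance scorer: in the paper's pipeline
    # this stage would be a fine-tuned LLM judging the patient note against the
    # trial's eligibility criteria. Here we use token overlap so the sketch runs.
    patient_tokens = set(tokenize(patient_note))
    trial_tokens = set(tokenize(trial_text))
    return len(patient_tokens & trial_tokens) / max(len(trial_tokens), 1)


def match_patient_to_trials(patient_note: str, trials: list[str],
                            k: int = 100, top_n: int = 10) -> list[int]:
    # Stage 1: lexical retrieval with BM25 over the full trial collection.
    bm25 = BM25Okapi([tokenize(t) for t in trials])
    scores = bm25.get_scores(tokenize(patient_note))
    candidates = sorted(range(len(trials)), key=lambda i: scores[i], reverse=True)[:k]
    # Stage 2: re-rank the top-k candidates with the (stand-in) LLM scorer.
    reranked = sorted(candidates,
                      key=lambda i: rerank_score(patient_note, trials[i]),
                      reverse=True)
    return reranked[:top_n]


if __name__ == "__main__":
    trials = [
        "Phase II trial for adults with type 2 diabetes and chronic kidney disease.",
        "Observational study of pediatric asthma patients under 12 years of age.",
        "Randomised trial of immunotherapy in metastatic melanoma, ECOG 0-1.",
    ]
    note = "62-year-old male with type 2 diabetes and stage 3 chronic kidney disease."
    print(match_patient_to_trials(note, trials, k=3, top_n=2))
```

The design point the sketch preserves is the effectiveness-efficiency trade-off discussed in the abstract: the cheap lexical stage limits how many candidate trials the expensive LLM stage must score.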
Pages: 12