Learning to match patients to clinical trials using large language models

Cited by: 3
Authors
Rybinski, Maciej [1 ]
Kusa, Wojciech [2 ]
Karimi, Sarvnaz [1 ]
Hanbury, Allan [2 ]
Affiliations
[1] CSIRO Data61, 26 Pembroke Rd, Marsfield, NSW 2122, Australia
[2] TU Wien, Favoritenstr 9-11, A-1040 Vienna, Austria
Funding
EU Horizon 2020;
Keywords
Clinical trials; Patient to trials matching; TCRR; TREC CT; Large language models; Information retrieval; Learning-to-rank;
DOI
10.1016/j.jbi.2024.104734
Chinese Library Classification (CLC) number
TP39 [Computer applications];
Subject classification codes
081203; 0835;
Abstract
Objective: This study investigates the use of Large Language Models (LLMs) for matching patients to clinical trials (CTs) within an information retrieval pipeline. Our objective is to enhance the process of patient-trial matching by leveraging the semantic processing capabilities of LLMs, thereby improving the effectiveness of patient recruitment for clinical trials.
Methods: We employed a multi-stage retrieval pipeline integrating various methodologies, including BM25 and Transformer-based rankers, along with LLM-based methods. Our primary datasets were the TREC Clinical Trials 2021-23 track collections. We compared LLM-based approaches, focusing on methods that leverage LLMs in query formulation, filtering, relevance ranking, and re-ranking of CTs.
Results: Our results indicate that LLM-based systems, particularly those involving re-ranking with a fine-tuned LLM, outperform traditional methods in terms of nDCG and Precision measures. The study demonstrates that fine-tuning LLMs enhances their ability to find eligible trials. Moreover, our LLM-based approach is competitive with state-of-the-art systems in the TREC challenges. The study shows the effectiveness of LLMs in CT matching, highlighting their potential in handling complex semantic analysis and improving patient-trial matching. However, the use of LLMs increases the computational cost and reduces efficiency. We provide a detailed analysis of effectiveness-efficiency trade-offs.
Conclusion: This research demonstrates the promising role of LLMs in enhancing the patient-to-clinical trial matching process, offering a significant advancement in the automation of patient recruitment. Future work should explore optimising the balance between computational cost and retrieval effectiveness in practical applications.
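To make the multi-stage design described in the abstract concrete, the sketch below shows a generic two-stage patient-to-trial ranking pipeline: a cheap lexical first stage (BM25) over the whole trial collection, followed by an expensive re-ranking pass over only the top-k candidates. This is an illustrative assumption, not the authors' implementation; in particular, llm_relevance_score, rank_trials, and the choice of k are hypothetical placeholders standing in for the paper's fine-tuned LLM re-ranker.

```python
# Illustrative sketch (not the authors' code) of a two-stage retrieval pipeline
# for patient-to-trial matching: BM25 first stage, LLM re-ranking second stage.
from collections import Counter
import math


def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """Score every trial document against the patient query with BM25."""
    n_docs = len(corpus_tokens)
    avg_len = sum(len(d) for d in corpus_tokens) / n_docs
    # Document frequency of each term across the trial collection.
    df = Counter(t for d in corpus_tokens for t in set(d))
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avg_len)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores


def llm_relevance_score(patient_text, trial_text):
    """Hypothetical stub: in a real system this would call a (fine-tuned) LLM
    that grades the trial's eligibility criteria against the patient note,
    e.g. returning a graded relevance score."""
    return 0.0


def rank_trials(patient_text, trial_texts, k=100):
    """Retrieve with BM25 over all trials, then re-rank only the top-k with the LLM."""
    query = patient_text.lower().split()
    corpus = [t.lower().split() for t in trial_texts]
    first_stage = bm25_scores(query, corpus)
    top_k = sorted(range(len(trial_texts)), key=lambda i: first_stage[i], reverse=True)[:k]
    reranked = sorted(top_k,
                      key=lambda i: llm_relevance_score(patient_text, trial_texts[i]),
                      reverse=True)
    return reranked  # trial indices, most relevant first
```

Restricting the LLM call to the top-k candidates reflects the effectiveness-efficiency trade-off discussed in the abstract: the re-ranker dominates the computational cost, so k controls how much of that cost is paid per patient.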
Pages: 12
Related papers (50 in total)
[1] Nievas, Mauro; Basu, Aditya; Wang, Yanshan; Singh, Hrituraj. Distilling large language models for matching patients to clinical trials. Journal of the American Medical Informatics Association, 2024, 31(9): 1953-1963.
[2] Lai-king, Mathieu; Paroubek, Patrick. Evaluation of Clinical Trials Reporting Quality using Large Language Models. Traitement Automatique des Langues, 2024, 65(2): 13-38.
[3] Markey, Nigel; El-Mansouri, Ilyass; Rensonnet, Gaetan; van Langen, Casper; Meier, Christoph. From RAGs to riches: Utilizing large language models to write documents for clinical trials. Clinical Trials, 2025.
[4] Layne, Ethan; Olivas, Claire; Hershenhouse, Jacob; Ganjavi, Conner; Cei, Francesco; Gill, Inderbir; Cacciamani, Giovanni E. Large language models for automating clinical trial matching. Current Opinion in Urology, 2025, 35(3): 250-258.
[5] Gu, Yang; Cao, Jian; Guo, Yuan; Qian, Shiyou; Guan, Wei. Plan, Generate and Match: Scientific Workflow Recommendation with Large Language Models. Service-Oriented Computing (ICSOC 2023), Part I, 2023, 14419: 86-102.
[6] Landman, Rogier; Healey, Sean P.; Loprinzo, Vittorio; Kochendoerfer, Ulrike; Winnier, Angela Russell; Henstock, Peter V.; Lin, Wenyi; Chen, Aqiu; Rajendran, Arthi; Penshanwar, Sushant; Khan, Sheraz; Madhavan, Subha. Using large language models for safety-related table summarization in clinical study reports. JAMIA Open, 2024, 7(2).
[7] Piccialli, Francesco; Chiaro, Diletta; Qi, Pian; Bellandi, Valerio; Damiani, Ernesto. Federated and edge learning for large language models. Information Fusion, 2025, 117.
[8] Qu, Changle; Dai, Sunhao; Wei, Xiaochi; Cai, Hengyi; Wang, Shuaiqiang; Yin, Dawei; Xu, Jun; Wen, Ji-rong. Tool learning with large language models: a survey. Frontiers of Computer Science, 2025, 19(8).
[9] Gao, Yingming; Nuchged, Baorian; Li, Ya; Peng, Linkai. An Investigation of Applying Large Language Models to Spoken Language Learning. Applied Sciences-Basel, 2024, 14(1).
[10] Contreras Kallens, Pablo; Kristensen-McLachlan, Ross Deans; Christiansen, Morten H. Large Language Models Demonstrate the Potential of Statistical Learning in Language. Cognitive Science, 2023, 47(3): e13256.