Application of ChatGPT as a content generation tool in continuing medical education: acne as a test topic

Cited by: 0
Authors
Naldi, Luigi [1 ,2 ]
Bettoli, Vincenzo [2 ,3 ]
Santoro, Eugenio [4 ]
Valetto, Maria Rosa [5 ]
Bolzon, Anna [1 ,6 ]
Cassalia, Fortunato [1 ,6 ]
Cazzaniga, Simone [2 ,7 ]
Cima, Sergio [5 ]
Danese, Andrea [8 ]
Emendi, Silvia [5 ]
Ponzano, Monica [6 ]
Scarpa, Nicoletta [5 ]
Dri, Pietro [5 ]
Affiliations
[1] Osped San Bortolo, Dermatol Unit, Vicenza, Italy
[2] Ctr Italian Grp Epidemiol Res Dermatol, Bergamo, Italy
[3] Univ Ferrara, Dept Med Sci, Sect Dermatol & Infect Dis, Ferrara, Italy
[4] Mario Negri Inst Pharmacol Res, Dept Clin Oncol, Unit Res Digital Hlth & Digital Therapeut, Milan, Italy
[5] Zadig Ltd Benefit Co, CME Natl Provider, Milan, Italy
[6] Univ Padua, Dept Med, Unit Dermatol, Padua, Italy
[7] Inselspital Univ Hosp Bern, Dept Dermatol, Bern, Switzerland
[8] Univ Verona, Dept Integrated Med & Gen Act, Unit Dermatol, Verona, Italy
Keywords
acne; artificial intelligence; ChatGPT; medical information; medical education; large language models
DOI
10.4081/dr.2024.10138
CLC classification number
R75 [Dermatology and Venereology]
Subject classification code
100206
Abstract
The large language model (LLM) ChatGPT can answer open-ended and complex questions, but its accuracy in providing reliable medical information requires careful assessment. As part of the AI-CHECK (Artificial Intelligence for CME Health E-learning Contents and Knowledge) study, which aims to evaluate the potential of ChatGPT in continuing medical education (CME), we compared ChatGPT-generated educational content with the recommendations of the National Institute for Health and Care Excellence (NICE) guidelines on acne vulgaris. ChatGPT version 4 was presented with a 23-item questionnaire developed by an experienced dermatologist. A panel of five dermatologists rated the answers positively in terms of "quality" (87.8%), "readability" (94.8%), "accuracy" (75.7%), "thoroughness" (85.2%), and "consistency" with guidelines (76.8%). The references provided by ChatGPT received positive ratings for "pertinence" (94.6%), "relevance" (91.2%), and "update" (62.3%). Internal reproducibility was adequate for both the answers (93.5%) and the references (67.4%). Answers addressing issues that are uncertain or controversial in the scientific community scored lowest. This study underscores the need to develop rigorous evaluation criteria for AI-generated medical content and to maintain expert oversight to ensure accuracy and adherence to guidelines.
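As an illustration of how panel ratings of this kind can be summarised into the percentage scores reported above, the short Python sketch below tallies the share of positive ratings per criterion. The four-point scale, the positivity cut-off, the sample data, and the positive_share helper are all illustrative assumptions and do not reproduce the instrument or analysis used in the AI-CHECK study.

# Illustrative sketch (not the AI-CHECK analysis code): tally, for each
# evaluation criterion, the percentage of panel ratings that count as positive.
from collections import defaultdict

# Hypothetical ratings: one record per (answer, rater, criterion); scores on an
# assumed 1-4 scale, where 3 or 4 is treated as a "positive" rating.
ratings = [
    {"answer": 1, "rater": "A", "criterion": "quality", "score": 4},
    {"answer": 1, "rater": "B", "criterion": "quality", "score": 3},
    {"answer": 1, "rater": "A", "criterion": "accuracy", "score": 2},
    {"answer": 2, "rater": "A", "criterion": "accuracy", "score": 4},
]

def positive_share(ratings, cutoff=3):
    """Return, per criterion, the percentage of ratings at or above the cutoff."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in ratings:
        totals[r["criterion"]] += 1
        if r["score"] >= cutoff:
            positives[r["criterion"]] += 1
    return {c: round(100 * positives[c] / totals[c], 1) for c in totals}

print(positive_share(ratings))  # e.g. {'quality': 100.0, 'accuracy': 50.0}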
Pages: 5
References
25 references in total
[1] Bettoli V, Naldi L, Santoro E, Valetto MR, Bolzon A, Cassalia F, Cazzaniga S, Cima S, Danese A, Emendi S, Ponzano M, Scarpa N, Dri P. ChatGPT and acne: accuracy and reliability of the information provided - the AI-CHECK study. Journal of the European Academy of Dermatology and Venereology, 2025, 39(4): e359-e362.
[2] Charvet-Berard AI, Chopard P, Perneger TV. Measuring quality of patient information documents with an expanded EQIP scale. Patient Education and Counseling, 2008, 70(3): 407-411.
[3] Chen S, Kann BH, Foote MB, Aerts HJWL, Savova GK, Mak RH, Bitterman DS. Use of artificial intelligence chatbots for cancer treatment information. JAMA Oncology, 2023, 9(10): 1459-1462.
[4] Cirone K. JMIR Dermatology, 2024, 7: e55508. DOI: 10.2196/55508.
[5] Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Frontiers in Artificial Intelligence, 2023, 6.
[6] Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Medical Education, 2023, 9.
[7] Ferreira AL. JMIR Dermatology, 2023, 6: e49280. DOI: 10.2196/49280.
[8] Goodman RS, Patrinely JR, Stone CA, Zimmerman E, Donald RR, Chang SS, Berkowitz ST, Finn AP, Jahangir E, Scoville EA, Reese TS, Friedman DL, Bastarache JA, van der Heijden YF, Wright JJ, Ye F, Carter N, Alexander MR, Choe JH, Chastain CA, Zic JA, Horst SN, Turker I, Agarwal R, Osmundson E, Idrees K, Kiernan CM, Padmanabhan C, Bailey CE, Schlegel CE, Chambless LB, Gibson MK, Osterman TJ, Wheless LE, Johnson DB. Accuracy and reliability of chatbot responses to physician questions. JAMA Network Open, 2023, 6(10).
[9] Gordon ER, Trager MH, Kontos D, Weng C, Geskin LJ, Dugdale LS, Samie FH. Ethical considerations for artificial intelligence in dermatology: a scoping review. British Journal of Dermatology, 2024, 190(6): 789-797.
[10] Lakdawala N. JMIR Dermatology, 2023, 6: e50409. DOI: 10.2196/50409.