Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care

Cited by: 3
|
Authors
Wang, Shangqiguo [1 ]
Mo, Changgeng [2 ]
Chen, Yuan [3 ]
Dai, Xiaolu [4 ]
Wang, Huiyi [5 ]
Shen, Xiaoli [6 ]
Affiliations
[1] Univ Hong Kong, Fac Educ, Human Commun Learning & Dev Unit, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Fac Med, Dept Otorhinolaryngol Head & Neck Surg, Hong Kong, Peoples R China
[3] Educ Univ Hong Kong, Dept Special Educ & Counselling, Hong Kong, Peoples R China
[4] Hong Kong Baptist Univ, Dept Social Work, Hong Kong, Peoples R China
[5] Zhejiang Univ, Childrens Hosp, Sch Med, Dept Med Serv, Hangzhou, Peoples R China
[6] Ningbo Coll, Hlth Sch, Dept Hlth & Early Childhood Care, Ningbo, Peoples R China
Source
JMIR MEDICAL EDUCATION | 2024, Vol. 10
Funding
UK Research and Innovation (UKRI);
Keywords
ChatGPT; medical education; artificial intelligence; AI; audiology; hearing care; natural language processing; large language model; Taiwan; hearing; hearing specialist; audiologist; examination; information accuracy; educational technology; healthcare services; chatbot; health care services;
DOI
10.2196/55595
Chinese Library Classification
G40 [Education];
Discipline Code
040101; 120403;
Abstract
Background: Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown immense potential for application across various aspects of medicine, including medical education, clinical practice, and research.
Objective: This study aimed to evaluate the performance of ChatGPT-4 in the 2023 Taiwan Audiologist Qualification Examination, thereby preliminarily exploring the potential utility of AI chatbots in the fields of audiology and hearing care services.
Methods: ChatGPT-4 was tasked with providing answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination encompassed six subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, with the exception of behavioral audiology, which had 49 questions, for a total of 299 questions.
Results: The correct answer rates across the six subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy rate for the 299 questions was 75%, which surpasses the examination's passing criterion of an average 60% accuracy rate across all subjects. A comprehensive review of ChatGPT-4's responses indicated that incorrect answers were predominantly due to information errors.
Conclusions: ChatGPT-4 demonstrated robust performance in the Taiwan Audiologist Qualification Examination, showcasing effective logical reasoning skills. Our results suggest that with enhanced information accuracy, ChatGPT-4's performance could be further improved. This study indicates significant potential for the application of AI chatbots in audiology and hearing care services.
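The reported overall accuracy follows arithmetically from the per-subject rates and question counts stated in the abstract. The Python sketch below is an illustrative check, not part of the original record: it reconstructs approximate per-subject correct counts by rounding the reported percentages (the exact counts are assumptions, not taken from the paper) and confirms that they sum to roughly 75% of 299 questions, above the 60% passing criterion.

```python
# Rough sanity check of the reported overall accuracy (75%) using the
# per-subject correct-answer rates and question counts from the abstract.
# Per-subject correct counts are inferred by rounding the reported
# percentages and are NOT taken from the paper itself.

subjects = {
    "Basic auditory science":                       (0.88, 50),
    "Behavioral audiology":                         (0.63, 49),
    "Electrophysiological audiology":               (0.58, 50),
    "Principles and practice of hearing devices":   (0.72, 50),
    "Health and rehabilitation (auditory/balance)": (0.80, 50),
    "Auditory and speech communication disorders":  (0.86, 50),
}

total_questions = sum(n for _, n in subjects.values())
estimated_correct = sum(round(rate * n) for rate, n in subjects.values())
overall_accuracy = estimated_correct / total_questions

print(f"Total questions: {total_questions}")              # 299
print(f"Estimated correct answers: {estimated_correct}")  # about 223
print(f"Estimated overall accuracy: {overall_accuracy:.0%}")  # about 75%, above the 60% pass mark
```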
Pages: 10
Related Papers
3 records
  • [1] Performance of ChatGPT-3.5 and ChatGPT-4 in the Taiwan National Pharmacist Licensing Examination: Comparative Evaluation Study
    Wang, Ying-Mei
    Shen, Hung-Wei
    Chen, Tzeng-Ji
    Chiang, Shu-Chiung
    Lin, Ting-Guan
    JMIR MEDICAL EDUCATION, 2025, 11
  • [2] Exploring the Performance of ChatGPT Versions 3.5, 4, and 4 With Vision in the Chilean Medical Licensing Examination: Observational Study
    Rojas, Marcos
    Rojas, Marcelo
    Burgess, Valentina
    Toro-Perez, Javier
    Salehi, Shima
    JMIR MEDICAL EDUCATION, 2024, 10
  • [3] Exploring the Potential of ChatGPT-4 in Responding to Common Questions About Abdominoplasty: An AI-Based Case Study of a Plastic Surgery Consultation
    Li, Wenbo
    Chen, Junjiang
    Chen, Fengmin
    Liang, Jiaqing
    Yu, Hongyu
    AESTHETIC PLASTIC SURGERY, 2024, 48(8): 1571-1583