Human Communication, Learning, and Development Unit, Faculty of Education, The University of Hong Kong, Hong Kong, China (Hong Kong).
Department of Otorhinolaryngology, Head and Neck Surgery, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, China (Hong Kong).
JMIR Med Educ. 2024 Apr 26;10:e55595. doi: 10.2196/55595.
Artificial intelligence (AI) chatbots, such as ChatGPT-4, have shown considerable potential across many areas of medicine, including medical education, clinical practice, and research.
This study aimed to evaluate the performance of ChatGPT-4 on the 2023 Taiwan Audiologist Qualification Examination and thereby provide a preliminary assessment of the potential utility of AI chatbots in audiology and hearing care services.
ChatGPT-4 was tasked with providing answers and reasoning for the 2023 Taiwan Audiologist Qualification Examination. The examination covered 6 subjects: (1) basic auditory science, (2) behavioral audiology, (3) electrophysiological audiology, (4) principles and practice of hearing devices, (5) health and rehabilitation of the auditory and balance systems, and (6) auditory and speech communication disorders (including professional ethics). Each subject included 50 multiple-choice questions, except behavioral audiology, which had 49, for a total of 299 questions.
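The abstract does not specify whether the questions were submitted through the ChatGPT web interface or programmatically. As a minimal sketch only, assuming the OpenAI Python SDK, the "gpt-4" model, and a hypothetical helper ask_exam_question, the protocol of posing one multiple-choice item and collecting the answer with its reasoning could look like this; the example item is invented for illustration and is not from the examination.

```python
# Hypothetical sketch of the questioning protocol; not the authors' actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_exam_question(stem: str, options: dict[str, str]) -> str:
    """Send one multiple-choice item and request an answer with reasoning."""
    formatted_options = "\n".join(f"({key}) {text}" for key, text in options.items())
    prompt = (
        "You are answering an item from the Taiwan Audiologist Qualification Examination.\n"
        f"Question: {stem}\n{formatted_options}\n"
        "Choose the single best option and explain your reasoning."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Invented example item, for illustration only:
print(ask_exam_question(
    "Which structure converts sound-induced vibrations into neural signals?",
    {"A": "Tympanic membrane", "B": "Cochlea", "C": "Eustachian tube", "D": "Pinna"},
))
```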
The correct answer rates across the 6 subjects were as follows: 88% for basic auditory science, 63% for behavioral audiology, 58% for electrophysiological audiology, 72% for principles and practice of hearing devices, 80% for health and rehabilitation of the auditory and balance systems, and 86% for auditory and speech communication disorders (including professional ethics). The overall accuracy across all 299 questions was 75%, exceeding the examination's passing criterion of an average accuracy of 60% across all subjects. A comprehensive review of ChatGPT-4's responses indicated that incorrect answers were predominantly due to information errors.
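For reference, the reported 75% overall rate is consistent with a question-weighted average of the per-subject rates. A minimal sketch of that arithmetic follows; it uses the rounded percentages reported above rather than the exact per-subject correct counts (which the abstract does not give), so the result is approximate.

```python
# Approximate consistency check: question-weighted average of the rounded
# per-subject accuracy rates reported in the abstract.
subjects = {
    "basic auditory science": (0.88, 50),
    "behavioral audiology": (0.63, 49),
    "electrophysiological audiology": (0.58, 50),
    "principles and practice of hearing devices": (0.72, 50),
    "health and rehabilitation of the auditory and balance systems": (0.80, 50),
    "auditory and speech communication disorders": (0.86, 50),
}

total_questions = sum(n for _, n in subjects.values())            # 299
weighted_correct = sum(rate * n for rate, n in subjects.values())
overall = weighted_correct / total_questions
print(f"{overall:.1%}")  # ~74.5%, i.e. about the reported 75%, above the 60% pass mark
```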
ChatGPT-4 demonstrated robust performance on the Taiwan Audiologist Qualification Examination and showed effective logical reasoning. Our results suggest that with improved information accuracy, ChatGPT-4's performance could be further enhanced. This study indicates substantial potential for the application of AI chatbots in audiology and hearing care services.