Siebielec Julia, Ordak Michal, Oskroba Agata, Dworakowska Anna, Bujalska-Zadrozny Magdalena
Department of Pharmacotherapy and Pharmaceutical Care, Faculty of Pharmacy, Medical University of Warsaw, 02-091 Warsaw, Poland.
Healthcare (Basel). 2024 Aug 16;12(16):1637. doi: 10.3390/healthcare12161637.
BACKGROUND/OBJECTIVES: The use of artificial intelligence (AI) in education is growing rapidly, and models such as ChatGPT show potential for enhancing medical education. In Poland, to obtain a medical diploma, candidates must pass the Medical Final Examination, which consists of 200 questions, each with a single correct answer, is administered in Polish, and assesses students' comprehensive medical knowledge and readiness for clinical practice. The aim of this study was to determine how ChatGPT-3.5 performs on the questions included in this exam.
METHODS: This study analyzed 980 questions from five examination sessions of the Medical Final Examination administered by the Medical Examination Center between 2022 and 2024. The analysis accounted for the field of medicine, each question's difficulty index, and the question type, namely theoretical versus case-study questions. A sketch of the difficulty-index calculation follows below.
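The abstract does not define the difficulty index, but the classical item difficulty index is conventionally the proportion of examinees who answer an item correctly. The minimal Python sketch below illustrates that convention; the data structure, identifiers, and numbers are hypothetical and not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class Item:
    question_id: str
    n_examinees: int  # examinees who attempted the question
    n_correct: int    # examinees who answered it correctly

def difficulty_index(item: Item) -> float:
    """Classical item difficulty index: the share of examinees who
    answered correctly. Higher values indicate an easier question."""
    return item.n_correct / item.n_examinees

# Hypothetical item: 7431 examinees, 4822 correct answers.
item = Item("MFE-2023-Q17", n_examinees=7431, n_correct=4822)
print(f"difficulty index = {difficulty_index(item):.2f}")  # -> 0.65
```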
RESULTS: ChatGPT's average rate of correct answers across the five examination sessions was approximately 60% and was lower (p < 0.001) than the average score achieved by the examinees. The lowest percentage of correct answers was in hematology (42.1%), while the highest was in endocrinology (78.6%). The difficulty index of the questions showed a statistically significant correlation with the correctness of the answers (p = 0.04). Questions that ChatGPT-3.5 answered incorrectly also had a lower (p < 0.001) percentage of correct responses among examinees. The type of question analyzed did not significantly affect the correctness of the answers (p = 0.46).
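The abstract does not name the statistical tests used. As an illustration only, the sketch below (with made-up toy numbers) shows two analyses consistent with the reported comparisons: a point-biserial correlation between the continuous difficulty index and binary ChatGPT correctness, and a chi-square test of question type against correctness. These test choices are assumptions, not the study's documented methods.

```python
import numpy as np
from scipy import stats

# Hypothetical toy data; the study's actual data are not public here.
difficulty = np.array([0.35, 0.48, 0.52, 0.61, 0.70, 0.74, 0.81, 0.88])
gpt_correct = np.array([0, 0, 0, 1, 1, 0, 1, 1])  # 1 = ChatGPT correct

# Point-biserial correlation: binary ChatGPT correctness vs. the
# continuous difficulty index of each question.
r, p = stats.pointbiserialr(gpt_correct, difficulty)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")

# Question type (theoretical vs. case study) vs. ChatGPT correctness:
# chi-square test on a 2x2 contingency table of invented counts.
table = np.array([[120, 80],    # theoretical: correct / incorrect
                  [110, 85]])   # case study:  correct / incorrect
chi2, p_type, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_type:.3f}")
```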
CONCLUSIONS: This study indicates that ChatGPT-3.5 can be an effective tool to support passing the final medical exam, but the results should be interpreted cautiously. It is recommended that the correctness of its answers be further verified using multiple AI tools.