L.K.S. Faculty of Medicine, University of Hong Kong, Hong Kong, Hong Kong S.A.R.
Department of Surgery, University of Edinburgh, Edinburgh, United Kingdom.
PLoS One. 2023 Aug 29;18(8):e0290691. doi: 10.1371/journal.pone.0290691. eCollection 2023.
Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared with questions written by university professoriate staff based on standard medical textbooks.
50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison's and Bailey & Love's). Another 50 MCQs were drafted by two university professoriate staff using the same medical textbooks. All 100 MCQs were individually numbered, randomized, and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of the alternatives, and suitability for a medical graduate examination.
The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, while it took the two human examiners a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the A.I.-constructed questions were compared with those of the human-drafted questions, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 ± 0.94 vs. human: 7.88 ± 0.52; p = 0.04). There was no significant difference in question quality between A.I.-generated and human-drafted questions in the total assessment score or in the other domains. Questions generated by the A.I. yielded a wider range of scores, while those created by humans were consistent and fell within a narrower range.
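As a minimal illustration of the kind of domain-score comparison reported above, the sketch below runs an independent-samples (Welch's) t-test on simulated relevance scores; the abstract does not state which statistical test the study actually used, and all numbers are synthetic, chosen only to roughly match the reported means and standard deviations.

```python
# Hypothetical sketch: comparing mean assessor scores for one domain between
# A.I.-generated and human-drafted MCQs. The exact test used in the study is
# not stated in this abstract; Welch's t-test is assumed here for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated relevance-domain scores for 50 A.I. and 50 human questions,
# loosely matching the reported 7.56 +/- 0.94 vs 7.88 +/- 0.52.
ai_scores = rng.normal(loc=7.56, scale=0.94, size=50)
human_scores = rng.normal(loc=7.88, scale=0.52, size=50)

# Welch's t-test does not assume equal variances, which is relevant because
# the A.I. scores showed a wider spread than the human scores.
t_stat, p_value = stats.ttest_ind(ai_scores, human_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```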
ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations within a significantly shorter time.