Safadi Sami, Amirahmadi Roxana, Tlimat Abdulhakim, Rovinski Randal, Sun Junfeng, Lee Burton W, Seam Nitin
Division of Nephrology and Hypertension, University of Minnesota, Minneapolis, Minnesota; Division of Pulmonary, Allergy, Critical Care and Sleep Medicine, University of Minnesota, Minneapolis, Minnesota.
Department of Critical Care Medicine, National Institutes of Health, Bethesda, Maryland.
Chest. 2025 Jul 18. doi: 10.1016/j.chest.2025.07.005.
Mechanical ventilation (MV) is a core competency in critical care training, yet standardized methods for assessing MV-related knowledge are lacking. Traditional multiple-choice question (MCQ) development is resource-intensive, and prior studies have suggested that generative AI tools could streamline question creation. However, the quality of AI-generated MCQs remains unclear.
Are MCQs generated by ChatGPT non-inferior to human-expert (HE)-written questions in quality and relevance for MV education?
Three key MV topics were selected: Equation of Motion & Ohm's Law, Tau & Auto-PEEP, and Oxygenation. Fifteen learning objectives were used to generate 15 AI-written MCQs via a standardized prompt to ChatGPT (model o1-preview-2024-09-12). A group of 31 faculty experts, all of whom regularly teach MV, evaluated both the AI-generated and HE-written MCQs. Each MCQ was assessed on its alignment with the learning objectives, accuracy of the keyed answer, clarity of the stem, plausibility of the distractors, and difficulty level. The faculty members were blinded to the provenance of each MCQ. The non-inferiority margin was predefined as 15% of the total possible score (-3.45 points).
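For context, the first two topics rest on standard respiratory-mechanics relations (textbook background, not drawn from the study itself): the equation of motion partitions airway pressure into elastic, resistive, and PEEP components, the Ohm's-law analogy corresponds to the resistive term (pressure drop = flow x resistance), and the expiratory time constant governs lung emptying, with short expiratory times relative to tau promoting auto-PEEP:

    \[
    P_{aw}(t) = \frac{V(t)}{C_{rs}} + R_{aw}\,\dot{V}(t) + \mathrm{PEEP}_{tot},
    \qquad
    \tau = R_{aw} \times C_{rs}
    \]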
AI-generated MCQs were statistically non-inferior to HE-written MCQs (one-sided 95% CI for the score difference: [-1.15, ∞), with the lower bound above the -3.45 margin). Additionally, respondents were unable to reliably distinguish AI-generated from HE-written MCQs (p = 0.32).
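To make the non-inferiority logic concrete, the following is a minimal sketch of the kind of check described above, assuming paired per-question quality-score differences (AI minus HE) and a one-sided 95% t-interval; the abstract does not specify the study's exact statistical model, and the data below are hypothetical.

    # Illustrative non-inferiority check: is the lower bound of the
    # one-sided 95% CI for the mean score difference above the margin?
    import numpy as np
    from scipy import stats

    def noninferiority_lower_bound(diffs, alpha=0.05):
        """Lower bound of the one-sided (1 - alpha) t-interval; CI is [lower, +inf)."""
        diffs = np.asarray(diffs, dtype=float)
        n = diffs.size
        mean = diffs.mean()
        se = diffs.std(ddof=1) / np.sqrt(n)
        t_crit = stats.t.ppf(1 - alpha, df=n - 1)
        return mean - t_crit * se

    MARGIN = -3.45  # prespecified margin: 15% of the total possible score
                    # (arithmetically, this implies a 23-point scale: 0.15 * 23 = 3.45)

    # Hypothetical per-question differences (not the study's data):
    diffs = [-1.2, 0.5, -0.8, 1.1, -0.3, 0.0, -1.5, 0.7,
             -0.2, 0.4, -0.9, 0.6, -0.1, 0.3, -0.5]
    lower = noninferiority_lower_bound(diffs)
    print(f"one-sided 95% CI: [{lower:.2f}, inf)")
    print("non-inferior" if lower > MARGIN else "inconclusive")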
MCQs generated with ChatGPT o1 are comparable in quality to those written by human experts. Given the time- and resource-intensive nature of human MCQ development, AI-assisted question generation may serve as an efficient and scalable alternative for medical education assessment, even in highly specialized domains such as mechanical ventilation.
None.