Department of Pharmacology, National University of Singapore, Yong Loo Lin School of Medicine, Singapore, Singapore.
Med Teach. 2024 Aug;46(8):1021-1026. doi: 10.1080/0142159X.2023.2294703. Epub 2023 Dec 26.
BACKGROUND: Crafting quality assessment questions in medical education is a crucial yet time-consuming, expertise-driven undertaking that calls for innovative solutions. Large language models (LLMs), such as ChatGPT (Chat Generative Pre-Trained Transformer), present a promising yet underexplored avenue for such innovation. AIMS: This study explores the utility of ChatGPT in generating diverse, high-quality medical questions, using multiple-choice questions (MCQs) as an illustrative example, to increase educators' productivity and enable self-directed learning for students. DESCRIPTION: Leveraging 12 strategies, we demonstrate how ChatGPT can be used effectively to generate assessment questions aligned with Bloom's taxonomy and core knowledge domains while promoting best practices in assessment design. CONCLUSION: Integrating LLM tools such as ChatGPT into the generation of medical assessment questions such as MCQs augments, but does not replace, human expertise. With continual refinement of instructions, AI can produce questions of a high standard. Yet the onus of ensuring ultimate quality and accuracy remains with subject-matter experts, affirming the irreplaceable value of human involvement in the artificial intelligence-driven education paradigm.
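The abstract describes prompting ChatGPT to produce MCQs aligned with Bloom's taxonomy while following best practices in assessment design. The paper's 12 strategies are not reproduced here; as a minimal, hypothetical sketch (the template wording, function name, and parameters are illustrative assumptions, not the authors' method), a prompt builder for such a request might look like:

```python
# Hypothetical sketch: construct a prompt asking an LLM to generate one
# single-best-answer MCQ targeting a chosen Bloom's taxonomy level.
# The template and its constraints are illustrative, not the paper's strategies.

BLOOM_LEVELS = {"remember", "understand", "apply", "analyze", "evaluate", "create"}

def build_mcq_prompt(topic: str, bloom_level: str, n_options: int = 5) -> str:
    """Return a prompt string suitable for an LLM chat endpoint."""
    if bloom_level.lower() not in BLOOM_LEVELS:
        raise ValueError(f"Unknown Bloom's taxonomy level: {bloom_level}")
    return (
        f"Write one single-best-answer multiple-choice question on {topic} "
        f"for medical students, targeting the '{bloom_level}' level of "
        f"Bloom's taxonomy. Provide {n_options} answer options, indicate "
        "the correct answer, and briefly explain why each distractor is "
        "incorrect. Keep the options homogeneous in length and grammar, "
        "and avoid absolute terms such as 'always' and 'never'."
    )

# The returned string would then be sent to a chat-based LLM API;
# the generated question still requires review by a subject-matter expert.
prompt = build_mcq_prompt("beta-blocker pharmacology", "apply")
print(prompt)
```

The item-writing constraints in the template (homogeneous options, no absolute terms) reflect widely taught MCQ guidelines; the validation step simply rejects levels outside Bloom's taxonomy before any API call is made.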