Comparison of Multiple-Choice Question Formats in a First Year Medical Physiology Course.

Author Information

Wilson L Britt, DiStefano Christine, Wang Huijuan, Blanck Erika L

Affiliations

Department of Pharmacology, Physiology and Neuroscience, University of South Carolina School of Medicine, Columbia, SC, USA.

Department of Educational and Developmental Science, College of Education, University of South Carolina, Columbia, SC, USA.

Publication Information

J CME. 2024 Aug 12;13(1):2390264. doi: 10.1080/28338073.2024.2390264. eCollection 2024.

Abstract

The purpose of this study was to compare student performance and question discrimination of multiple-choice questions (MCQs) that followed a standard format (SF) versus those that did not, termed here non-standard format (NSF). Medical physiology exam results of approximately 500 first-year medical students collected over a five-year period (2020-2024) were used. Classical test theory item analysis indices, e.g. discrimination (D), point-biserial correlation (r), distractor analysis for non-functional distractors (NFDs), and difficulty (p), were determined and compared across MCQ format types. The results presented here are the mean ± standard error of the mean (SEM). The analysis showed that D (0.278 ± 0.008 vs 0.228 ± 0.006) and r (0.291 ± 0.006 vs 0.273 ± 0.006) were significantly higher for NSF questions compared to SF questions, indicating that NSF questions provided more discriminatory power. In addition, the percentage of NFDs was lower for NSF items compared to SF ones (58.3 ± 0.019% vs 70.2 ± 0.015%). Also, the NSF questions proved to be more difficult than the SF questions (p = 0.741 ± 0.007 for NSF; p = 0.809 ± 0.006 for SF). Thus, NSF questions discriminated better, had fewer NFDs, and were more difficult than SF questions. These data suggest that writing questions in the selected non-standard formats can enhance the ability to discriminate higher performers from lower performers on MCQs as well as provide more rigour for exams.
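
The abstract names the classical test theory item indices (difficulty p, discrimination D, point-biserial r, and non-functional distractors) without showing how they are obtained. The Python sketch below illustrates one conventional way to compute them for a single MCQ; the upper/lower 27% split for D, the 5% response cut-off for NFDs, and the corrected (item-removed) point-biserial are common conventions assumed here for illustration, not details confirmed by the paper.

```python
import numpy as np

def item_statistics(item_scores, total_scores, responses, key, nfd_threshold=0.05):
    """Classical test theory indices for one multiple-choice item.

    item_scores  : 0/1 per examinee (1 = answered this item correctly)
    total_scores : total test score per examinee
    responses    : option chosen by each examinee, e.g. 'A'..'E'
    key          : the keyed (correct) option
    nfd_threshold: distractors chosen by fewer than this fraction of
                   examinees are counted as non-functional (assumed 5%)
    """
    item_scores = np.asarray(item_scores, dtype=float)
    total_scores = np.asarray(total_scores, dtype=float)
    responses = np.asarray(responses)

    # Difficulty (p): proportion of examinees answering the item correctly.
    p = item_scores.mean()

    # Discrimination (D): difference in item difficulty between the top and
    # bottom 27% of examinees ranked by total test score (a common split).
    n = len(total_scores)
    k = max(1, int(round(0.27 * n)))
    order = np.argsort(total_scores)
    D = item_scores[order[-k:]].mean() - item_scores[order[:k]].mean()

    # Point-biserial (r): Pearson correlation between the dichotomous item
    # score and the total score with this item removed (corrected r_pb).
    r = np.corrcoef(item_scores, total_scores - item_scores)[0, 1]

    # Non-functional distractors: incorrect options endorsed by fewer than
    # nfd_threshold of examinees. Options never chosen do not appear in
    # `responses`, so pass the full option set in practice.
    distractors = [o for o in np.unique(responses) if o != key]
    nfd = sum((responses == o).mean() < nfd_threshold for o in distractors)

    return {"p": p, "D": D, "r": r, "NFD_count": nfd}


if __name__ == "__main__":
    # Tiny run on fabricated responses (not data from the study).
    responses = np.array(list("CCABCCDBCC"))
    key = "C"
    item = (responses == key).astype(int)
    totals = np.array([18, 20, 9, 7, 16, 15, 11, 8, 19, 17])
    print(item_statistics(item, totals, responses, key))
```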

Similar Articles

2. Writing Multiple Choice Questions-Has the Student Become the Master?
Teach Learn Med. 2023 Jun-Jul;35(3):356-367. doi: 10.1080/10401334.2022.2050240. Epub 2022 May 1.

