
Item analysis of multiple choice questions: A quality assurance test for an assessment tool.

Author information

Kumar Dharmendra, Jaipurkar Raksha, Shekhar Atul, Sikri Gaurav, Srinivas V

Affiliations

Assistant Professor, Department of Physiology, Armed Forces Medical College, Pune, India.

Associate Professor, Department of Physiology, Armed Forces Medical College, Pune, India.

Publication information

Med J Armed Forces India. 2021 Feb;77(Suppl 1):S85-S89. doi: 10.1016/j.mjafi.2020.11.007. Epub 2021 Feb 2.

Abstract

BACKGROUND

Item analysis of multiple choice questions (MCQs) is an essential tool that provides evidence on the validity and reliability of test items. It helps identify items that should be revised or discarded, thereby building a quality MCQ bank.

METHODS

The study focused on item analysis of 90 MCQs from three tests administered to 150 first-year Bachelor of Medicine and Bachelor of Surgery (MBBS) physiology students. The analysis examined the difficulty index (DIF I), the discrimination index (DI), and distractor effectiveness (DE). Statistical analysis was performed using MS Excel 2010 and SPSS version 20.0.
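
The abstract does not spell out how the three indices were computed, but they are conventionally derived from the student-by-item response matrix. The Python sketch below uses common textbook formulas and assumes a 27% upper/lower group split for DI and a 5% response threshold for counting a distractor as functional; these cut-offs and the function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal item-analysis sketch (assumed textbook formulas, not the
# authors' exact procedure, which the abstract does not specify).
import numpy as np

def difficulty_index(correct):
    """DIF I: percentage of examinees answering the item correctly.
    correct: 1-D array of 0/1 responses for one item."""
    return 100.0 * np.mean(np.asarray(correct))

def discrimination_index(correct, total_scores, frac=0.27):
    """DI: (upper-group correct - lower-group correct) / group size,
    with groups formed from the top and bottom `frac` of total scores
    (27% split is a common convention, assumed here)."""
    correct = np.asarray(correct)
    total_scores = np.asarray(total_scores)
    k = max(1, int(round(frac * len(total_scores))))
    order = np.argsort(total_scores)          # ascending by total score
    lower, upper = order[:k], order[-k:]
    return (correct[upper].sum() - correct[lower].sum()) / k

def distractor_effectiveness(choices, key, n_options=4, cutoff=0.05):
    """DE: percentage of an item's distractors chosen by at least
    `cutoff` of examinees (so-called functional distractors).
    choices: 1-D array of chosen option indices; key: correct option."""
    choices = np.asarray(choices)
    distractors = [o for o in range(n_options) if o != key]
    functional = sum((choices == o).mean() >= cutoff for o in distractors)
    return 100.0 * functional / len(distractors)
```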

RESULTS

Of the 90 MCQs, the majority, 74 (82%), had a good/acceptable level of difficulty, with a mean DIF I of 55.32 ± 7.4 (mean ± SD), whereas seven (8%) were too difficult and nine (10%) were too easy. A total of 72 (80%) items had an excellent to acceptable DI and 18 (20%) had a poor DI, with an overall mean DI of 0.31 ± 0.12. There was a significant but weak correlation between DIF I and DI (r = 0.140, p < .0001). The mean DE was 32.35 ± 31.3, with 73% of distractors functional overall. The reliability of the test items was good, with a Cronbach alpha of 0.85 and a Kuder-Richardson Formula 20 value of 0.71. The standard error of measurement was 1.22.
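
For reference, the reliability statistics quoted above are usually defined as follows; these are the standard textbook forms, assumed here rather than reproduced from the paper:

```latex
% Standard definitions (assumed textbook forms, not quoted from the paper).
% Kuder-Richardson Formula 20 for k dichotomous items, where p_i is the
% proportion answering item i correctly, q_i = 1 - p_i, and \sigma_X^2 is
% the variance of total test scores:
r_{\mathrm{KR20}} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^2}\right)

% Cronbach's alpha, with \sigma_i^2 the variance of item i:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2}\right)

% Standard error of measurement, from the test standard deviation SD and
% a reliability coefficient r (e.g., alpha or KR-20):
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - r}
```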

CONCLUSION

Our study helped teachers identify good and ideal MCQs that can be included in the question bank for future use, as well as MCQs that need revision. We recommend that item analysis be performed for all MCQ-based assessments to determine their validity and reliability.


Cited by

The equation for medical multiple-choice question testing time estimation.
Ann Med Surg (Lond). 2024 Apr 4;86(5):2688-2695. doi: 10.1097/MS9.0000000000002010. eCollection 2024 May.

