
Effective use of Item Analysis to improve the Reliability and Validity of Undergraduate Medical Examinations: Evaluating the same exam over many years: a different approach.

Author Information

Zubairi Nadeem Alam, AlAhmadi Turki Saad, Ibrahim Mohamed Hesham, Hegazi Moustafa Abdelaal, Gadi Fahad Ussif

Affiliations

Nadeem Alam Zubairi, Department of Pediatrics, Faculty of Medicine, King Abdulaziz University, Rabigh, Saudi Arabia.

Turki Saad AlAhmadi, Department of Pediatrics, Faculty of Medicine, King Abdulaziz University, Rabigh, Saudi Arabia.

Publication Information

Pak J Med Sci. 2025 Mar;41(3):810-815. doi: 10.12669/pjms.41.3.10693.

Abstract

OBJECTIVE

MCQ exams form part of end-module assessments in undergraduate medical institutions, and Item Analysis (IA) is the standard tool for checking their reliability and validity. IA provides the Reliability Coefficient KR-20, the Difficulty Index (DI), the Discrimination Index (DISC), and the Distractor Efficiency (DE). Almost all research papers on IA are based on single-exam analyses. We instead examined the IA of multiple exams of the same module administered over four years, aiming to explore consistency across years and the effectiveness of IA-based post-exam measures.
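The three score-based metrics named above can be sketched in Python. This is a minimal illustration, not the authors' implementation; the 27% upper/lower split for the discrimination index and the sample-variance (`ddof=1`) choice in KR-20 are common conventions assumed here.

```python
import numpy as np


def item_analysis(responses):
    """Compute KR-20, Difficulty Index, and Discrimination Index.

    responses: (n_students, n_items) matrix of 1 (correct) / 0 (incorrect).
    Returns (kr20, difficulty, discrimination).
    """
    X = np.asarray(responses, dtype=float)
    n, k = X.shape

    # Difficulty Index: proportion of examinees answering each item correctly.
    p = X.mean(axis=0)
    q = 1.0 - p

    # KR-20: (k / (k-1)) * (1 - sum(p*q) / variance of total scores).
    totals = X.sum(axis=1)
    kr20 = (k / (k - 1)) * (1.0 - (p * q).sum() / totals.var(ddof=1))

    # Discrimination Index: item-level difference between the upper and
    # lower 27% of examinees ranked by total score (a common convention).
    order = np.argsort(totals)
    g = max(1, int(round(0.27 * n)))
    lower, upper = X[order[:g]], X[order[-g:]]
    disc = upper.mean(axis=0) - lower.mean(axis=0)

    return kr20, p, disc
```

An item with a negative DISC (lower-scoring students outperform higher-scoring ones, as in the 28 flagged MCQs below) is the usual trigger for post-exam review of that item.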

METHODOLOGY

Item analyses of the eight final MCQ exams of the Pediatric module, administered from 2020-21 to 2023-24 at the Faculty of Medicine in Rabigh, King Abdulaziz University, Saudi Arabia, were included in the study.

RESULTS

All exams had a KR-20 of 0.90 or above, indicating excellent reliability. Difficulty levels were consistent except for a single year, and discriminative ability was maintained throughout; only 28 of the 800 MCQs had a negative DISC. All exams maintained good DE, with only 15 MCQs over the four years showing zero DE. The practice of reviewing all Non-Functional Distractors yielded a gradual improvement in exam quality.
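Distractor Efficiency, reported above, is driven by how many distractors are "functional". A common convention, assumed in this sketch (not taken from the paper), is that a distractor is non-functional if fewer than 5% of examinees select it; DE is then the fraction of distractors that are functional.

```python
def distractor_efficiency(choice_counts, correct_index, threshold=0.05):
    """Fraction of an item's distractors that are functional.

    choice_counts: number of examinees selecting each option (A, B, C, ...).
    correct_index: index of the keyed answer, which is not a distractor.
    threshold: minimum selection rate for a distractor to count as
        functional (5% is a common convention, assumed here).
    """
    total = sum(choice_counts)
    distractors = [c for i, c in enumerate(choice_counts) if i != correct_index]
    functional = sum(1 for c in distractors if c / total >= threshold)
    return functional / len(distractors)
```

An item where no distractor reaches the threshold has zero DE, as with the 15 MCQs flagged in the results; such items are the natural candidates for the distractor review the authors describe.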

CONCLUSION

Besides the IA of individual exams, it is recommended that the IA of the same exam be tracked over 4-5 years to assess consistency and trends toward improvement. This helps improve reliability and validity by addressing deficiencies and deviations from recommended standards.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a3ee/11911747/cd2706a1bdf0/PJMS-41-810-g001.jpg
