Shappell Eric, Podolej Gregory, Ahn James, Tekian Ara, Park Yoon Soo
Department of Emergency Medicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA.
Department of Emergency Medicine, University of Illinois at Peoria, IL, USA.
Eval Health Prof. 2021 Sep;44(3):315-318. doi: 10.1177/0163278720908914. Epub 2020 Mar 4.
Mastery learning assessments have been described in simulation-based educational interventions; however, studies applying mastery learning to multiple-choice tests (MCTs) are lacking. This study investigates an approach to item generation and standard setting for mastery learning MCTs and evaluates the consistency of learner performance across sequential tests. Item models, variables for question stems, and mastery standards were established using a consensus process. Two test forms were created using the item models. Tests were administered at two training programs. The primary outcome, the test-retest consistency of pass-fail decisions across versions of the test, was 94% (κ = .54). Decision-consistency classification was .85. Item-level consistency was 90% (κ = .77, σ = .03). These findings support the use of automatic item generation to create mastery MCTs that produce consistent pass-fail decisions. This technique broadens the range of assessment methods available to educators for settings that require serial MCT testing, including mastery learning curricula.
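As a rough illustration of the two ideas in the abstract, the sketch below shows (1) automatic item generation from an item model whose question stem contains variable slots, and (2) chance-corrected agreement (Cohen's κ) between pass-fail decisions on two parallel test forms. The item model text, the variable values, and the pass-fail vectors are hypothetical placeholders, not the instruments or data from the study; this is a minimal sketch of the general technique, assuming a simple template-expansion approach to item generation.

```python
from itertools import product

# Hypothetical item model: a question stem with variable slots; each
# combination of variable values yields a distinct but parallel item.
ITEM_MODEL = ("A {age}-year-old patient presents with {complaint}. "
              "What is the most appropriate next step in management?")
VARIABLES = {"age": [25, 40, 67], "complaint": ["chest pain", "syncope"]}

def generate_items(model, variables):
    """Expand an item model into parallel items, one per combination of variable values."""
    keys = list(variables)
    return [model.format(**dict(zip(keys, combo)))
            for combo in product(*(variables[k] for k in keys))]

def cohens_kappa(decisions_a, decisions_b):
    """Chance-corrected agreement between binary pass (1) / fail (0) decisions on two forms."""
    n = len(decisions_a)
    observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    p_a, p_b = sum(decisions_a) / n, sum(decisions_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    for item in generate_items(ITEM_MODEL, VARIABLES):
        print(item)
    # Illustrative pass/fail decisions for ten learners on forms A and B (not study data).
    form_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
    form_b = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
    print(f"observed agreement = {sum(a == b for a, b in zip(form_a, form_b)) / len(form_a):.0%}, "
          f"kappa = {cohens_kappa(form_a, form_b):.2f}")
```

In this toy example the observed agreement is 90% with κ ≈ 0.62; the study's reported figures (94% agreement, κ = .54 for pass-fail decisions) would be computed in the same spirit, with the exact decision-consistency method described in the full paper.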