Huang Qi Helen, Bolt Daniel M
University of Wisconsin-Madison, USA.
Appl Psychol Meas. 2022 Jun;46(4):303-320. doi: 10.1177/01466216221084207. Epub 2022 Apr 15.
Binary examinee mastery/nonmastery classifications in cognitive diagnosis models may often be an approximation to proficiencies that are better regarded as continuous. Such misspecification can lead to inconsistencies in the operational definition of "mastery" when binary skills models are assumed. In this paper we demonstrate the potential for an interpretational confounding of the latent skills when truly continuous skills are treated as binary. Using the DINA model as an example, we show how such forms of confounding can be observed through item and/or examinee parameter change when (1) different collections of items (such as those representing different test forms) previously calibrated separately are subsequently calibrated together; and (2) structural restrictions are placed on the relationships among skill attributes (such as the assumption of strictly nonnegative growth over time), among other possibilities. We examine these occurrences in both simulation and real data studies. It is suggested that researchers should regularly attend to the potential for interpretational confounding by studying differences in attribute mastery proportions and/or changes in item parameter (e.g., slip and guess) estimates attributable to skill continuity when the same samples of examinees are administered different test forms, or the same test forms are involved in different calibrations.
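For readers unfamiliar with the DINA model referenced above, the item response function it assumes can be sketched briefly. In the DINA model, an examinee answers an item correctly with probability 1 − s (slip) if they have mastered every skill the item requires, and with probability g (guess) otherwise. The sketch below is illustrative only; the function name and example parameter values are hypothetical, not taken from the paper.

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct response) under the DINA model for one examinee-item pair.

    alpha : list of 0/1 skill-mastery indicators for the examinee
    q     : the item's Q-matrix row (1 = skill required by the item)
    slip  : probability of an incorrect response despite full mastery
    guess : probability of a correct response without full mastery
    """
    # eta = 1 only if every required skill (q[k] == 1) is mastered
    eta = all(a >= qk for a, qk in zip(alpha, q))
    return 1 - slip if eta else guess

# An examinee mastering both required skills responds correctly
# with probability 1 - slip; otherwise only with probability guess.
p_master = dina_prob([1, 1, 0], [1, 1, 0], slip=0.1, guess=0.2)   # 0.9
p_nonmaster = dina_prob([1, 0, 0], [1, 1, 0], slip=0.1, guess=0.2)  # 0.2
```

The conjunctive ("all-or-nothing") structure of eta is what makes the binary mastery assumption consequential: when true proficiency is continuous, examinees near a mastery threshold can shift between the 1 − s and g response regimes across calibrations, which is the parameter change the abstract describes.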