
DIF Analysis with Unknown Groups and Anchor Items.

Affiliations

Department of Mathematics and Statistics, Lancaster University, Umeå, Sweden.

Department of Statistics, London School of Economics and Political Science, Columbia House, Room 5.16 Houghton Street, London, WC2A 2AE, UK.

Publication Information

Psychometrika. 2024 Mar;89(1):267-295. doi: 10.1007/s11336-024-09948-7. Epub 2024 Feb 21.

Abstract

Ensuring fairness in instruments such as survey questionnaires or educational tests is crucial. One way to address this is through Differential Item Functioning (DIF) analysis, which examines whether different subgroups respond differently to a particular item after controlling for their overall level on the latent construct. DIF analysis is typically conducted to assess measurement invariance at the item level. Traditional DIF analysis methods require knowing the comparison groups (reference and focal groups) and anchor items (a subset of DIF-free items). Such prior knowledge is not always available, and psychometric methods have been proposed for DIF analysis when one of these pieces of information is unknown. More specifically, when the comparison groups are unknown but anchor items are known, latent DIF analysis methods have been proposed that estimate the unknown groups by latent classes. When anchor items are unknown but comparison groups are known, methods have also been proposed, typically under a sparsity assumption: the number of DIF items is not too large. However, DIF analysis when both pieces of information are unknown has received little attention. This paper proposes a general statistical framework for this setting. In the proposed framework, we model the unknown groups by latent classes and introduce item-specific DIF parameters to capture the DIF effects. Assuming that the number of DIF items is relatively small, an L1-regularised estimator is proposed to simultaneously identify the latent classes and the DIF items. A computationally efficient Expectation-Maximisation (EM) algorithm is developed to solve the non-smooth optimisation problem for the regularised estimator. The performance of the proposed method is evaluated by simulation studies and an application to item response data from a real-world educational test.
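The abstract describes the estimator only at a high level. As a rough illustration of the mechanics it mentions (latent classes for the unknown groups, item-specific DIF parameters, an L1 penalty, and an EM algorithm for the resulting non-smooth problem), the sketch below fits a two-class mixture of 2PL item response models in which every item carries a class-specific DIF shift, penalised by an L1 term and updated by soft-thresholding inside the M-step. This is not the authors' implementation: the model form, the quadrature scheme, the single proximal-gradient M-step, and all names and tuning constants (em_dif, lam, lr, n_quad) are illustrative assumptions; X is assumed to be an (n, J) array of binary responses.

```python
# A minimal, self-contained sketch of the idea described in the abstract -- NOT the
# authors' implementation.  It fits a two-class mixture of 2PL item response models in
# which every item carries a class-specific DIF shift gamma, puts an L1 (lasso) penalty
# on those shifts, and runs an EM algorithm whose M-step takes one proximal-gradient
# (soft-thresholding) step.  All tuning constants here are illustrative assumptions.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.special import expit


def em_dif(X, n_classes=2, lam=0.05, n_quad=21, n_iter=200, lr=0.1, seed=0):
    """Regularised EM for a latent-class 2PL with sparse DIF shifts (sketch)."""
    rng = np.random.default_rng(seed)
    n, J = X.shape
    nodes, weights = hermegauss(n_quad)            # quadrature for theta ~ N(0, 1)
    weights = weights / weights.sum()

    a = np.ones(J)                                 # item discriminations
    b = np.zeros(J)                                # item intercepts
    gamma = np.zeros((J, n_classes))               # DIF shifts; class 0 is the reference
    gamma[:, 1:] = 0.1 * rng.standard_normal((J, n_classes - 1))  # break class symmetry
    pi = np.full(n_classes, 1.0 / n_classes)       # latent-class proportions

    for _ in range(n_iter):
        # E-step: posterior over (quadrature node q, class g) for every respondent.
        # Logit for item j at node q in class g is a_j * theta_q + b_j + gamma_{jg}.
        logits = a[None, None, :] * nodes[:, None, None] + b[None, None, :] + gamma.T[None, :, :]
        p = expit(logits)                                          # shape (Q, G, J)
        ll = X @ np.log(p).reshape(-1, J).T + (1 - X) @ np.log(1 - p).reshape(-1, J).T
        ll = ll.reshape(n, n_quad, n_classes)
        logpost = ll + np.log(weights)[None, :, None] + np.log(pi)[None, None, :]
        logpost -= logpost.max(axis=(1, 2), keepdims=True)
        post = np.exp(logpost)
        post /= post.sum(axis=(1, 2), keepdims=True)               # shape (n, Q, G)

        # M-step: closed form for pi, one penalised gradient step for item parameters.
        pi = post.sum(axis=(0, 1)) / n
        w = post.sum(axis=0).T                                     # expected counts, (G, Q)
        s = np.einsum('nqg,nj->gqj', post, X)                      # expected successes
        resid = s - w[:, :, None] * p.transpose(1, 0, 2)           # score contributions
        a += lr * np.einsum('gqj,q->j', resid, nodes) / n
        b += lr * resid.sum(axis=(0, 1)) / n
        gamma[:, 1:] += lr * resid.sum(axis=1).T[:, 1:] / n
        # Soft-thresholding: the L1 penalty shrinks most DIF shifts exactly to zero,
        # so the surviving nonzero rows of gamma flag the candidate DIF items.
        gamma[:, 1:] = np.sign(gamma[:, 1:]) * np.maximum(np.abs(gamma[:, 1:]) - lr * lam, 0.0)

    return a, b, gamma, pi
```

The soft-thresholding step is the proximal operator of the L1 penalty, which is what makes the non-smooth objective tractable inside EM and yields exact zeros: the posterior weights recover the unknown groups, while the items whose estimated shifts remain nonzero are flagged as DIF items and the rest act as anchors.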

