Expanding Kane's argument-based validity framework: What can validation practices in language assessment offer health professions education?

Affiliations

UCL Institute of Education, University College London, London, UK.

Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, Melbourne, Victoria, Australia.

Publication information

Med Educ. 2024 Dec;58(12):1462-1468. doi: 10.1111/medu.15452. Epub 2024 Jun 13.

Abstract

CONTEXT

One central consideration in health professions education (HPE) is to ensure we are making sound and justifiable decisions based on the assessment instruments we use on health professionals. To achieve this goal, HPE assessment researchers have drawn on Kane's argument-based framework to ascertain the validity of their assessment tools. However, the original four-inference model proposed by Kane - frequently used in HPE validation research - has its limitations in terms of what each inference entails and what claims and sources of backing are housed in each inference. The under-specification in the four-inference model has led to inconsistent practices in HPE validation research, posing challenges for (i) researchers who want to evaluate the validity of different HPE assessment tools and/or (ii) researchers who are new to test validation and need to establish a coherent understanding of argument-based validation.

METHODS

To address these identified concerns, this article introduces the expanded seven-inference argument-based validation framework that is established practice in the field of language testing and assessment (LTA). We explicate (i) why LTA researchers experienced the need to further specify the original four Kanean inferences; (ii) how LTA validation research defines each of its seven inferences; and (iii) what claims, assumptions and sources of backing are associated with each inference. Sampling six representative validation studies in HPE, we demonstrate why an expanded model and a shared disciplinary validation framework can facilitate the examination of validity evidence in diverse HPE validation contexts.

CONCLUSIONS

We invite HPE validation researchers to experiment with the seven-inference argument-based framework from LTA to evaluate its usefulness to HPE. We also call for greater interdisciplinary dialogue between HPE and LTA since both disciplines share many fundamental concerns about language use, communication skills, assessment practices and validity in assessment instruments.
