
The Debriefing Assessment in Real Time (DART) tool for simulation-based medical education.

Author Information

Baliga Kaushik, Halamek Louis P, Warburton Sandra, Mathias Divya, Yamada Nicole K, Fuerch Janene H, Coggins Andrew

Affiliations

Sydney Medical School, Westmead Hospital, Block K, Level 6, Sydney, NSW, 2145, Australia.

Division of Neonatal and Developmental Medicine, Department of Pediatrics, Stanford University School of Medicine, Palo Alto, CA, USA.

Publication Information

Adv Simul (Lond). 2023 Mar 14;8(1):9. doi: 10.1186/s41077-023-00248-1.

Abstract

BACKGROUND

Debriefing is crucial for enhancing learning following healthcare simulation. Various validated tools have been shown to have contextual value for assessing debriefers. The Debriefing Assessment in Real Time (DART) tool may offer an alternative or additional assessment of conversational dynamics during debriefings.

METHODS

This is a multi-method international study investigating reliability and validity. Enrolled raters (n = 12) were active simulation educators. Following tool training, the raters were asked to score a mixed sample of debriefings. Descriptive statistics were recorded, with coefficient of variation (CV%) and Cronbach's α used to estimate reliability. Raters returned a detailed reflective survey following their contribution. Kane's framework was used to construct validity arguments.
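The two reliability statistics named above have standard definitions that can be sketched briefly. The rater-by-debriefing matrix layout below is an illustrative assumption, not the study's actual data structure:

```python
import statistics

def cv_percent(scores):
    # Coefficient of variation: 100 * sample SD / mean of rater scores.
    return 100.0 * statistics.stdev(scores) / statistics.mean(scores)

def cronbach_alpha(matrix):
    # Hypothetical layout: one row per debriefing, one column per rater.
    # alpha = k/(k-1) * (1 - sum of per-rater variances / variance of row totals)
    k = len(matrix[0])                       # number of raters ("items")
    cols = list(zip(*matrix))                # per-rater score columns
    item_vars = sum(statistics.variance(c) for c in cols)
    total_var = statistics.variance([sum(row) for row in matrix])
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

With perfectly agreeing raters, `cronbach_alpha` returns 1.0; values above roughly 0.85, as reported in the Results, are conventionally read as high internal consistency.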

RESULTS

The 8 debriefings (μ = 15.4 min (SD 2.7)) included 45 interdisciplinary learners at various levels of training. Reliability (mean CV%) for key components was as follows: instructor questions μ = 14.7%, instructor statements μ = 34.1%, and trainee responses μ = 29.0%. Cronbach α ranged from 0.852 to 0.978 across the debriefings. Post-experience responses suggested that DARTs can highlight suboptimal practices including unqualified lecturing by debriefers.

CONCLUSION

The DART demonstrated acceptable reliability and may have a limited role in assessment of healthcare simulation debriefing. Inherent complexity and emergent properties of debriefing practice should be accounted for when using this tool.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/410c/10015941/826b8e798aa1/41077_2023_248_Fig1_HTML.jpg
