
Pilot study of the DART tool - an objective healthcare simulation debriefing assessment instrument.

Author information

Sydney Medical School, Westmead Hospital, Block K, Level 6, Sydney, NSW, 2145, Australia.

Simulated Learning Environment for Clinical Training (SiLECT), Westmead Hospital, Sydney, NSW, 2145, Australia.

Publication information

BMC Med Educ. 2022 Aug 22;22(1):636. doi: 10.1186/s12909-022-03697-w.

Abstract

BACKGROUND

Various rating tools aim to assess simulation debriefing quality, but their use may be limited by complexity and subjectivity. The Debriefing Assessment in Real Time (DART) tool represents an alternative debriefing aid that uses quantitative measures to estimate quality and requires minimal training to use. The DART uses a cumulative tally of instructor questions (IQ), instructor statements (IS) and trainee responses (TR). Ratios for IQ:IS and TR:[IQ + IS] may estimate the level of debriefer inclusivity and participant engagement.
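The ratio arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch only: the variable names and the example tallies are hypothetical, not taken from the DART instrument or the study data.

```python
# Hedged sketch: computing DART-style ratios from raw session tallies.
# All counts below are illustrative, not data from the study.
iq = 14        # instructor questions (IQ)
is_count = 7   # instructor statements (IS); "is" is a Python keyword
tr = 28        # trainee responses (TR)

iq_is_ratio = iq / is_count        # IQ:IS, an estimate of debriefer inclusivity
engagement = tr / (iq + is_count)  # TR:[IQ + IS], an estimate of participant engagement

print(f"IQ:IS = {iq_is_ratio:.2f}")
print(f"TR:[IQ + IS] = {engagement:.2f}")
```

A higher IQ:IS ratio suggests the debriefer favors questions over statements; a higher TR:[IQ + IS] ratio suggests trainees contribute more relative to the instructor.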

METHODS

Experienced faculty from four geographically disparate university-affiliated simulation centers rated video-based debriefings and a transcript using the DART. The primary endpoint was an assessment of the estimated reliability of the tool. The small sample size confined the analysis to descriptive statistics, with the coefficient of variation (CV%) serving as an estimate of reliability.
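The coefficient of variation is the standard deviation divided by the mean, expressed as a percentage. A minimal sketch follows, assuming the sample standard deviation (the paper does not specify sample vs. population SD, and the rater tallies shown are hypothetical):

```python
import statistics

def cv_percent(ratings):
    """Coefficient of variation (%) across a set of rater tallies."""
    return statistics.stdev(ratings) / statistics.mean(ratings) * 100

# Illustrative IQ tallies from seven hypothetical raters (not study data).
iq_ratings = [12, 15, 10, 18, 14, 11, 16]
print(f"CV% = {cv_percent(iq_ratings):.1f}%")
```

A lower CV% indicates that raters produced more similar tallies for the same debriefing, i.e., higher estimated reliability.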

RESULTS

Ratings for Video A (n = 7), Video B (n = 6), and Transcript A (n = 6) demonstrated mean CV% values of 27.8% for IQ, 39.5% for IS, 34.8% for TR, 40.8% for IQ:IS, and 28.0% for TR:[IQ + IS]. The higher CV% observed for IS and TR may be attributable to raters characterizing longer contributions as either lumped or split. The lower variance in IQ and TR:[IQ + IS] suggests overall consistency regardless of whether scores were lumped or split.

CONCLUSION

The DART tool appears to be a reliable means of recording data that may be useful for informing feedback to debriefers. Future studies should assess reliability in a wider pool of debriefings and examine potential uses in faculty development.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d2a8/9394081/6ac3327be9b5/12909_2022_3697_Fig1_HTML.jpg
