Video-based peer assessment of collaborative teamwork in a large-scale interprofessional learning activity.

Affiliations

Division of Medicine, School of Medicine and Population Health, The University of Sheffield, Sheffield, UK.

Independent Scholar, Sydney, NSW, Australia.

Publication information

BMC Med Educ. 2024 Nov 14;24(1):1307. doi: 10.1186/s12909-024-06124-4.

Abstract

BACKGROUND

The assessment of team performance within large-scale Interprofessional Learning (IPL) initiatives is an important but underexplored area. It is essential for demonstrating the effectiveness of collaborative learning outcomes in preparing students for professional practice. Using Kane's validity framework, we investigated whether peer assessment of student-produced videos depicting collaborative teamwork in an IPL activity was sufficiently valid for decision-making about team performance, and where the sources of error might lie to optimise future iterations of the assessment.

METHODS

A large cohort of health professional students (n = 1218) from 8 different professions was divided into teams of 5-6 students. Each team collaborated to produce a short video evidencing their management of one of 12 complex patient cases. Students from two other teams who had worked on the same case individually rated each video using a previously developed assessment scale. A generalisability study quantified the sources of error affecting the reliability of peer assessment of collaborative teamwork. A decision study modelled the impact of differing numbers of raters. A modified Angoff procedure determined the pass/fail mark.
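
For context on the decision (D) study mentioned above: a D study projects how the reliability (G) coefficient changes as the number of peer raters per video varies. The sketch below is a minimal illustration of that projection, not the authors' analysis; the variance components VAR_TEAM and VAR_ERROR are hypothetical values chosen only so that a panel of 10-12 raters lands near the reported G = 0.71.

# Minimal D-study sketch (illustrative only; variance components are
# hypothetical and not taken from the paper).

def g_coefficient(var_team: float, var_error: float, n_raters: int) -> float:
    """Relative G coefficient: team (true-score) variance divided by
    team variance plus error variance averaged over n_raters."""
    return var_team / (var_team + var_error / n_raters)

VAR_TEAM = 1.0    # hypothetical variance due to genuine differences between teams
VAR_ERROR = 4.5   # hypothetical rater stringency/subjectivity plus residual, per single rating

if __name__ == "__main__":
    # Project reliability for different panel sizes, as a D study would.
    for n in (2, 4, 6, 8, 10, 12, 16, 20):
        print(f"raters={n:2d}  G={g_coefficient(VAR_TEAM, VAR_ERROR, n):.2f}")

With these illustrative values, roughly 10-12 raters per video are needed to reach G ≈ 0.71, mirroring the reported finding that pooled scores from two peer teams (10-12 students) per video achieved that level of reliability.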

RESULTS

Within a large-scale learning activity, peer assessment of collaborative teamwork was reliable (G = 0.71) based on scoring by students from two teams (n = 10-12) for each video. The main sources of variation were the stringency and subjectivity of fellow student assessors. Whilst professions marked with differing stringency, and individual student assessors had different views of the quality of a particular video, none of that individual assessor variance was attributable to the assessors' profession. Teams performed similarly across the 12 cases overall, and no particular professions marked differently on any particular case.

CONCLUSION

A peer assessment of a student-produced video depicting interprofessional collaborative teamwork around the management of complex patient cases can be valid for decision-making about student team performance. Further refinement of marking rubrics and student assessor training could potentially reduce assessor subjectivity. The impact of professions on assessing individual peers and the case-specificity of team performances in IPL settings need further exploration. This innovative approach to assessment offers a promising avenue for enhancing the measurement of collaborative learning outcomes in large-scale interprofessional learning initiatives.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/17e8/11566248/ce911a5c952f/12909_2024_6124_Fig1_HTML.jpg
