

Do coursework summative assessments predict clinical performance? A systematic review.

Author information

Terry Rebecca, Hing Wayne, Orr Robin, Milne Nikki

Affiliations

Physiotherapy Program, Faculty of Health Sciences and Medicine, Bond University, Gold Coast, 4226, Australia.

Publication information

BMC Med Educ. 2017 Feb 16;17(1):40. doi: 10.1186/s12909-017-0878-3.

Abstract

BACKGROUND

Two goals of summative assessment in health profession education programs are to ensure the robustness of high-stakes decisions, such as progression and licensing, and to predict future performance. This systematic and critical review aims to investigate the ability of specific modes of summative assessment to predict the clinical performance of health profession education students.

METHODS

PubMed, CINAHL, SPORTDiscus, ERIC and EMBASE databases were searched using key terms, and the retrieved articles were screened against dedicated inclusion criteria. Rigorous exclusion criteria were applied to ensure a consistent interpretation of 'summative assessment' and 'clinical performance'. Data were extracted using a pre-determined format, and papers were critically appraised by two independent reviewers using a modified Downs and Black checklist, with the level of agreement between reviewers determined through a Kappa analysis.
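Inter-rater agreement on the critical appraisal was quantified with a Kappa statistic. As a minimal sketch of how such agreement can be computed, assuming the two reviewers' checklist judgements are coded as categorical labels (the items and ratings below are invented for illustration and are not data from this review):

# Minimal sketch: Cohen's kappa between two independent reviewers.
# The item-level ratings are hypothetical; they are NOT the actual
# modified Downs and Black appraisal data from this review.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
reviewer_b = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no", "yes", "yes"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.3f}")  # values above 0.80 are conventionally read as 'almost perfect' agreement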

RESULTS

Of the 4783 studies retrieved from the search strategy, 18 studies were included in the final review. Twelve were from the medical profession and there was one from each of physiotherapy, pharmacy, dietetics, speech pathology, dentistry and dental hygiene. Objective Structured Clinical Examinations featured in 15 papers, written assessments in four, and problem-based learning evaluations, case-based learning evaluations and student portfolios each featured in one paper. Sixteen different measures of clinical performance were used. Two papers were identified as 'poor' quality and the remainder categorised as 'fair', with an almost perfect (k = 0.852) level of agreement between raters. Objective Structured Clinical Examination scores accounted for 1.4-39.7% of the variance in student performance; multiple choice/extended matching questions and short answer written examinations accounted for 3.2-29.2%; problem-based or case-based learning evaluations accounted for 4.4-16.6%; and student portfolios accounted for 12.1%.
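The 'percentage of variance accounted for' figures can be read as coefficients of determination. Assuming they derive from simple correlations between summative assessment scores and clinical performance measures (the correlation value below is illustrative, chosen only to reproduce the upper end of the reported OSCE range), the relationship is:

% R^2 as variance explained; r = 0.63 is an illustrative value only
R^2 = r^2, \qquad r = 0.63 \;\Rightarrow\; R^2 = 0.63^2 \approx 0.397 = 39.7\%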

CONCLUSIONS

Objective structured clinical examinations and written examinations consisting of multiple choice/extended matching questions and short answer questions do have significant relationships with the clinical performance of health professional students. However, caution should be applied if using these assessments as predictive measures for clinical performance due to a small body of evidence and large variations in the predictive strength of the relationships identified. Based on the current evidence, the Objective Structured Clinical Examination may be the most appropriate summative assessment for educators to use to identify students that may be at risk of poor performance in a clinical workplace environment. Further research on this topic is needed to improve the strength of the predictive relationship.


Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b3eb/5314623/5059ac7d06c5/12909_2017_878_Fig1_HTML.jpg
