Department of Internal Medicine, College of Medicine, University of Kentucky, Lexington, Kentucky, USA.
Med Educ. 2010 Apr;44(4):379-86. doi: 10.1111/j.1365-2923.2009.03612.x.
Evaluations in the clinical arena are fraught with problems. Current assessments of clinical teaching typically measure attributes of clinical teachers in overly broad terms, are often subjective, and are prone to the halo effect. This is in contradistinction to measurements of lectures, workshops or online educational content, which can more readily be assessed using objective criteria. As a result, clinical evaluations are often insufficient to provide focused feedback, guide faculty development or identify specific areas in which clinical teachers can implement change and improvement. The aim of our study was to address these limitations.
We developed a structured, 15-item objective structured clinical examination (OSCE)-type checklist of discrete teaching behaviours intended to be: (i) observable; (ii) applicable to multiple disciplines, and (iii) reliably identifiable. Our goal was to test and utilise this checklist as an objective assessment of clinical teaching across a range of in-patient teaching rounds. During 2007-2008, on two separate occasions, pairs of external raters observed nine attending physicians during actual paediatrics and internal medicine in-patient ward rounds at a large, academic medical centre. Observers documented the extent to which specific teaching behaviours did or did not occur.
The internal consistency of the 15-item checklist was good (alpha = 0.85). A two-facet, partially nested G study found the generalisability of ratings to be generally acceptable, but inter-rater reliability varied greatly between occasions and across individual checklist items.
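The internal consistency statistic reported above can be reproduced from raw checklist data with the standard Cronbach's alpha formula. The following is a minimal sketch, not the authors' actual analysis; the rating matrix is hypothetical (a small toy grid rather than the study's 15-item checklist), and only illustrates the computation.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for rows of observations over k checklist items."""
    k = len(scores[0])
    # Sum of sample variances of each item (column).
    item_vars = sum(variance(col) for col in zip(*scores))
    # Sample variance of each observation's total score.
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical yes/no ratings: 4 observed sessions x 3 checklist items.
ratings = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
]
print(round(cronbach_alpha(ratings), 2))  # 0.75 for this toy matrix
```

A generalisability (G) study goes further, partitioning score variance into facets such as rater, occasion and item, which is why the abstract can report acceptable overall generalisability alongside item- and occasion-level variability in inter-rater reliability.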
Despite attempts to identify discrete and observable target behaviours, placing observers on rounds to detect these behaviours may not be as straightforward as it would seem. Clinical teaching may be a more inherently subjective process, reflecting the differing teaching styles of faculty staff. However, a set of objective checklist items completed by trained observers on teaching rounds holds promise as a viable means of identifying strengths and weaknesses of clinical instruction. Further research is needed to define what constitutes quality clinical teaching, as well as the most reliable method for assessing it.