Mazor Kathleen M, Canavan Colleen, Farrell Margaret, Margolis Melissa J, Clauser Brian E
Acad Med. 2008 Oct;83(10 Suppl):S9-12. doi: 10.1097/ACM.0b013e318183e329.
This study investigated whether participants' subjective reports of how they assigned ratings on a multisource feedback instrument provide evidence to support interpreting the resulting scores as objective, accurate measures of professional behavior.
Twenty-six participants completed think-aloud interviews while rating students, residents, or faculty members they had worked with previously. The items rated included 15 behavioral items and one global item.
Participants referred to generalized behaviors and global impressions six times as often as specific behaviors, rated observees in the absence of information necessary to do so, relied on indirect evidence about performance, and varied in how they interpreted items.
Behavioral change is difficult to effect if it is unclear which behaviors raters considered when providing feedback. These findings highlight the importance of explicitly stating and empirically investigating the assumptions that underlie the use of an observational assessment tool.