Division of Global Mental Health, Department of Psychiatry and Behavioral Sciences, George Washington University, 2120 L St NW, Suite 600, Washington, D.C. 20037, USA.
Behav Res Ther. 2020 Jul;130:103531. doi: 10.1016/j.brat.2019.103531. Epub 2019 Dec 14.
A major challenge in scaling up psychological interventions worldwide is how to evaluate competency among new workforces engaged in psychological services. One approach to measuring competency is through standardized role plays. Role plays have the benefits of standardization and reliance on observed behavior rather than written knowledge. However, role plays are also resource intensive and dependent upon inter-rater reliability. We undertook a two-part scoping review to describe how competency is conceptualized in studies evaluating the relationship of competency with client outcomes. We focused on the use of role plays, including achieving inter-rater reliability and the association with client outcomes. First, we identified 4 reviews encompassing 61 studies evaluating the association of competency with client outcomes. Second, we identified 39 competency evaluation tools, of which 21 were used in comparisons with client outcomes. Inter-rater reliability (intraclass correlation coefficient) was reported for 15 tools and ranged from 0.53 to 0.96 (mean ICC = 0.77). However, we found that none of the outcome comparison studies measured competency with standardized role plays. Instead, studies typically used therapy quality (i.e., session ratings with actual clients) as a proxy for competency. This reveals a gap in the evidence base for competency and its role in predicting client outcomes. We therefore propose a competency research agenda to develop an evidence base for objective, standardized role plays to measure competency and its association with client outcomes. OPEN SCIENCE REGISTRATION #: https://osf.io/nqhu7/.