Department of Psychology.
Psychol Assess. 2020 Jul;32(7):623-634. doi: 10.1037/pas0000818. Epub 2020 Apr 2.
The Reading the Mind in the Eyes task (RMET; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001) is commonly used to assess theory of mind abilities in adults. In the task, participants pair one of four mental state descriptors with a picture of the eye region of a face. The items have varying emotional valence, and nearly 100 studies have examined whether performance on this task varies with item valence. However, efforts to address this question have been hampered by cross-study inconsistencies in how item valence is assessed. Thus, the goal of this study was to establish reference ratings for the valence of RMET items. In Study 1, we recorded valence ratings for each RMET item with a large sample of raters (N = 164). We illustrated how valence categories are essentially arbitrary and largely influenced by sample size. In addition, valence ratings were continuously distributed, further questioning the validity of imposing categorical distinctions. In Study 2, we used an archival dataset to demonstrate how the different categorization schemes resulted in conflicting conclusions about the association between item valence and RMET performance. However, when we examined the association between item valence and performance in a continuous manner, a clear U-shaped pattern emerged: Items that had more extreme valence ratings (negative or positive) were associated with better performance than items with more neutral ratings. We conclude that using the item valence ratings we report, and treating item valence as a continuous rather than categorical predictor, will help bring consistency to the study of the association between item valence and performance in the RMET. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
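The continuous analysis described above amounts to testing for a quadratic (U-shaped) relation between item valence and accuracy. A minimal sketch of that idea, using entirely synthetic data (not the authors' ratings or dataset; the 36-item count matches the standard RMET, but all values below are simulated for illustration):

```python
# Illustrative sketch only: modeling item-level RMET accuracy as a
# continuous, quadratic function of item valence. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical valence ratings for 36 items, from -2 (negative)
# through 0 (neutral) to +2 (positive).
valence = rng.uniform(-2.0, 2.0, size=36)

# Simulate a U-shaped pattern: more extreme valence -> higher accuracy.
accuracy = 0.55 + 0.08 * valence**2 + rng.normal(0.0, 0.02, size=36)
accuracy = np.clip(accuracy, 0.0, 1.0)

# Treat valence as a continuous predictor: fit a quadratic polynomial.
# coeffs = [quadratic term, linear term, intercept]
coeffs = np.polyfit(valence, accuracy, deg=2)

# A positive quadratic coefficient indicates a U-shaped association,
# i.e., better performance at both valence extremes than near neutral.
print(f"quadratic coefficient: {coeffs[0]:.3f}")
```

A categorical scheme would instead bin items into negative/neutral/positive groups before comparison, which discards the ordering and spacing of ratings that the quadratic fit exploits.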