Department of Psychological Sciences, Birkbeck, University of London, London, UK.
Wellcome Centre for Human Neuroimaging, University College London (UCL), London, UK.
Q J Exp Psychol (Hove). 2023 Dec;76(12):2854-2864. doi: 10.1177/17470218231163007. Epub 2023 Mar 30.
It is often assumed that the recognition of facial expressions is impaired in autism. However, recent evidence suggests that reports of expression recognition difficulties in autistic participants may be attributable to co-occurring alexithymia (a trait associated with difficulties interpreting interoceptive and emotional states), not autism per se. Due to problems fixating on the eye region, autistic individuals may be more reliant on information from the mouth region when judging facial expressions. As such, it may be easier to detect expression recognition deficits attributable to autism, not alexithymia, when participants are forced to base expression judgements on the eye region alone. To test this possibility, we compared the ability of autistic participants (with and without high levels of alexithymia) and non-autistic controls to categorise facial expressions (a) when the whole face was visible, and (b) when the lower portion of the face was covered with a surgical mask. High-alexithymic autistic participants showed clear evidence of expression recognition difficulties: they correctly categorised fewer expressions than non-autistic controls. In contrast, low-alexithymic autistic participants were unimpaired relative to non-autistic controls. The same pattern of results was seen when judging masked and unmasked expression stimuli. In sum, we find no evidence for an expression recognition deficit attributable to autism, in the absence of high levels of co-occurring alexithymia, either when participants judge whole-face stimuli or just the eye region. These findings underscore the influence of co-occurring alexithymia on expression recognition in autism.