Atkinson Anthony P, Duran Nazire, Skraga Abigail, Winterbottom Anita, Wright Jack D
Department of Psychology, Durham University, Durham, UK.
https://orcid.org/0000-0002-2649-4569.
J Vis. 2025 Jul 1;25(8):4. doi: 10.1167/jov.25.8.4.
The precise contributions of foveal and extrafoveal visual processing to facial emotion recognition and to how individuals gaze at faces remain poorly understood. We used gaze-contingent masking and windowing to control foveal and extrafoveal inputs while observers (N = 35) classified the emotion (anger, disgust, fear, surprise, sadness) on face images. Emotion classification performance was substantially reduced by the absence of extrafoveal information but was unaffected by the absence of foveal information. Gaze decoding showed that fixation patterns discriminated viewed emotion categories regardless of whether either foveal or extrafoveal information was absent or both were present, more so when observers provided correct responses. Although fixations clustered around the eyes, nose, and upper mouth, emotion-specific biases in fixation densities aligned with regions previously identified as emotion diagnostic, and, for trials with incorrect responses, with locations informative of the most confused emotion. Even without extrafoveal information, necessitating top-down guidance of gaze, fixations were biased to these same emotion-informative regions. Yet, the spatiotemporal sequencing of fixations differed in the absence versus presence of extrafoveal information. Fixation patterns also predicted stimulus presentation conditions, most evident in differences due to the absence versus presence of extrafoveal rather than foveal inputs. Thus, where one looks on a face impacts the ability to determine its emotional expression, not only via the higher resolving power of foveal vision but also by the extrafoveal extraction of task-relevant information and guidance of gaze, and possibly also via the interplay between foveal and extrafoveal vision that underpins presaccadic attention.