Giri Amita, Smith Grace, Manting Cassia, Dobs Katharina, Adler Amir, Pantazis Dimitrios
McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA.
Department of Electronics & Communication Engineering, Indian Institute of Technology, Roorkee, Uttarakhand, India.
bioRxiv. 2025 Jun 3:2025.06.03.657646. doi: 10.1101/2025.06.03.657646.
The human brain can effortlessly extract a familiar face's age, gender, and identity despite dramatic changes in appearance, such as head orientation, lighting, or expression. Yet, the spatiotemporal dynamics underlying this ability, and how they depend on task demands, remain unclear. Here, we used multivariate decoding of magnetoencephalography (MEG) responses and source localization to characterize the emergence of invariant face representations. Human participants viewed natural images of highly familiar celebrities that systematically varied in viewpoint, gender, and age, while performing a one-back task on the identity or the image. Time-resolved decoding revealed that identity information emerged rapidly and became increasingly invariant to viewpoint over time. We observed a temporal hierarchy: view-specific identity information appeared at 64 ms, followed by mirror-invariant representations at 75 ms and fully view-invariant identity at 89 ms. Identity-invariant age and gender information emerged around the same time as view-invariant identity. Task demands modulated only late-stage identity and gender representations, suggesting that early face processing is predominantly feedforward. Source localization at peak decoding times showed consistent involvement of the occipital face area (OFA) and fusiform face area (FFA), with stronger identity and age signals than gender. Our findings reveal the spatiotemporal dynamics by which the brain extracts view-invariant identity from familiar faces, suggest that age and gender are processed in parallel, and show that task demands modulate later processing stages. Together, these results offer new constraints on computational models of face perception.
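To illustrate the time-resolved, cross-condition decoding approach described above, the sketch below shows one common way such analyses are implemented: a classifier is trained at each time point on MEG sensor patterns from one viewpoint and tested on another, so that above-chance accuracy indicates view-invariant identity information. This is a minimal sketch using scikit-learn on synthetic data; the array shapes, variable names, function name, and classifier choice are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of time-resolved MEG decoding with cross-view generalization.
# Assumes epochs are already preprocessed into a NumPy array of shape
# (n_trials, n_sensors, n_times). Names and parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def time_resolved_cross_view_decoding(X, identity_labels, view_labels,
                                      train_view, test_view):
    """Train an identity classifier at each time point on trials from one
    viewpoint and test it on trials from a different viewpoint."""
    n_times = X.shape[2]
    train_idx = view_labels == train_view
    test_idx = view_labels == test_view
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X[train_idx, :, t], identity_labels[train_idx])
        accuracy[t] = clf.score(X[test_idx, :, t], identity_labels[test_idx])
    return accuracy

# Example usage with synthetic data standing in for MEG epochs:
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 306, 120))        # trials x sensors x time points
identity_labels = rng.integers(0, 8, size=200)  # e.g. 8 celebrity identities
view_labels = rng.integers(0, 2, size=200)      # e.g. two head orientations
acc = time_resolved_cross_view_decoding(X, identity_labels, view_labels, 0, 1)
```

Tracking when this cross-view accuracy first exceeds chance, relative to decoding trained and tested within the same viewpoint, is one way to separate view-specific from view-invariant identity information over time.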