Blank Helen, Kiebel Stefan J, von Kriegstein Katharina
Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom.
Hum Brain Mapp. 2015 Jan;36(1):324-39. doi: 10.1002/hbm.22631. Epub 2014 Sep 13.
Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity information, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals.