School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
Trends Cogn Sci. 2013 Jun;17(6):263-71. doi: 10.1016/j.tics.2013.04.004. Epub 2013 May 10.
Both faces and voices are rich in socially relevant information, which humans are remarkably adept at extracting, including a person's identity, age, gender, affective state, and personality. Here, we review accumulating evidence from behavioral, neuropsychological, electrophysiological, and neuroimaging studies suggesting that the cognitive and neural processing mechanisms engaged by perceiving faces or voices are highly similar, despite the very different nature of their sensory inputs. This similarity between the two mechanisms likely facilitates the multimodal integration of facial and vocal information during everyday social interactions. These findings highlight a parsimonious principle of cerebral organization: similar computational problems in different modalities are solved with similar solutions.