Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany.
Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication, 04103 Leipzig, Germany.
Neuropsychologia. 2018 Jul 31;116(Pt B):179-193. doi: 10.1016/j.neuropsychologia.2018.03.039. Epub 2018 Mar 31.
Humans have a remarkable skill for voice-identity recognition: most of us can remember many voices that surround us as 'unique'. In this review, we explore the computational and neural mechanisms which may support our ability to represent and recognise a unique voice-identity. We examine the functional architecture of voice-sensitive regions in the superior temporal gyrus/sulcus, and bring together findings on how these regions may interact with each other, and with additional face-sensitive regions, to support voice-identity processing. We also contrast findings from studies on neurotypicals and clinical populations which have examined the processing of familiar and unfamiliar voices. Taken together, the findings suggest that representations of familiar and unfamiliar voices might dissociate in the human brain. Such an observation does not fit well with current models for voice-identity processing, which by and large assume a common sequential analysis of the incoming voice signal, regardless of voice familiarity. We provide a revised audio-visual integrative model of voice-identity processing which brings together traditional and prototype models of identity processing. This revised model includes a mechanism for how voice-identity representations are established and provides a novel framework for understanding and examining the potential differences in familiar and unfamiliar voice processing in the human brain.