Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada.
International Laboratory for Brain, Music and Sound Research (BRAMS); Centre for Research in Brain, Language and Music; Centre for Interdisciplinary Research in Music, Media, and Technology, Montreal, QC, Canada.
Science. 2020 Feb 28;367(6481):1043-1047. doi: 10.1126/science.aaz3468.
Does brain asymmetry for speech and music emerge from acoustical cues or from domain-specific neural networks? We selectively filtered temporal or spectral modulations in sung speech stimuli for which verbal and melodic content was crossed and balanced. Perception of speech decreased only with degradation of temporal information, whereas perception of melodies decreased only with spectral degradation. Functional magnetic resonance imaging data showed that the neural decoding of speech and melodies depends on activity patterns in left and right auditory regions, respectively. This asymmetry is supported by specific sensitivity to spectrotemporal modulation rates within each region. Finally, the effects of degradation on perception were paralleled by their effects on neural classification. Our results suggest a match between acoustical properties of communicative signals and neural specializations adapted to that purpose.
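The filtering manipulation described above operates on the spectrotemporal modulation spectrum of a sound: degrading temporal modulations removes fast amplitude changes over time, while degrading spectral modulations removes fine detail along the frequency axis. A minimal sketch of the general idea, assuming a toy spectrogram and a simple 2D-FFT low-pass along one modulation axis (the function name, cutoff parameterization, and synthetic stimulus are illustrative only, not the authors' actual filtering pipeline):

```python
import numpy as np

def filter_modulations(spec, axis, cutoff_frac):
    """Low-pass filter the modulation spectrum of a (freq x time)
    spectrogram along one axis.

    axis=1 degrades temporal modulations (fast amplitude changes);
    axis=0 degrades spectral modulations (fine frequency structure).
    cutoff_frac: fraction of modulation rates to keep, in (0, 1].
    """
    # The 2D Fourier transform of a spectrogram gives its
    # spectrotemporal modulation spectrum.
    mod = np.fft.fft2(spec)
    n = spec.shape[axis]
    # Modulation-rate coordinate along the chosen axis (cycles/sample).
    rates = np.abs(np.fft.fftfreq(n))
    keep = rates <= cutoff_frac * rates.max()
    shape = [1, 1]
    shape[axis] = n
    # Zero out modulation rates above the cutoff, then invert.
    mod = mod * keep.reshape(shape)
    return np.real(np.fft.ifft2(mod))

# Toy spectrogram: a slow spectral ripple plus a fast temporal fluctuation.
f = np.linspace(0, 1, 64)[:, None]   # frequency axis
t = np.linspace(0, 1, 128)[None, :]  # time axis
spec = np.sin(2 * np.pi * 2 * f) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Removing fast temporal modulations flattens the rapid fluctuation
# in time while leaving the spectral ripple largely intact.
degraded = filter_modulations(spec, axis=1, cutoff_frac=0.1)
```

In this sketch, the temporal degradation leaves each spectral channel's mean nearly unchanged but collapses its variation over time, mirroring the logic of the stimulus manipulation: speech-relevant temporal detail can be removed while melody-relevant spectral structure survives, and vice versa.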