Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan.
Graduate School of Frontier Biosciences, Osaka University, Suita, Japan.
Brain Behav. 2021 Jan;11(1):e01936. doi: 10.1002/brb3.1936. Epub 2020 Nov 8.
Humans tend to categorize auditory stimuli into discrete classes, such as animal species, languages, musical instruments, and music genres. Of these, music genre is a frequently used dimension of human music preference and is determined by the categorization of complex auditory stimuli. Neuroimaging studies have reported that the superior temporal gyrus (STG) responds to general music-related features. However, there is considerable uncertainty over how discrete music categories are represented in the brain and which acoustic features are best suited for explaining such representations.
We used a total of 540 music clips to examine comprehensive cortical representations and the functional organization of music genre categories. For this purpose, we applied a voxel-wise modeling approach to music-evoked brain activity measured using functional magnetic resonance imaging (fMRI). In addition, we introduced a novel technique for feature-brain similarity analysis and assessed how discrete music categories are represented based on the cortical response patterns to acoustic features.
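The voxel-wise modeling approach described above typically fits a separate regularized linear model that predicts each voxel's response from stimulus features. The sketch below is a minimal, hypothetical illustration using ridge regression on synthetic data (the feature dimensions, penalty, and evaluation are assumptions for illustration, not the authors' actual pipeline):

```python
import numpy as np

# Toy voxel-wise encoding model: predict each voxel's response from
# acoustic stimulus features with ridge regression, fit per voxel.
rng = np.random.default_rng(0)
n_time, n_feat, n_vox = 200, 20, 5          # assumed toy dimensions

X = rng.standard_normal((n_time, n_feat))    # stimulus feature matrix
true_w = rng.standard_normal((n_feat, n_vox))
Y = X @ true_w + 0.1 * rng.standard_normal((n_time, n_vox))  # "BOLD" data

alpha = 1.0  # ridge penalty; in practice chosen by cross-validation
# Closed-form ridge solution, shared design across voxels
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Model accuracy: correlation between predicted and observed responses
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_vox)]
```

In real encoding-model studies the correlations would be computed on held-out test stimuli rather than the training data shown here.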
Our findings indicated distinct cortical organizations for different music genres in the bilateral STG and revealed representational relationships among genres. When comparing different acoustic feature models, we found that these genre representations could be explained largely by a biologically plausible spectro-temporal modulation-transfer function model.
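A spectro-temporal modulation-transfer function model characterizes a sound by the energy it carries at different temporal modulation rates (Hz) and spectral modulation scales (cycles/octave). One common simplified stand-in, sketched below on a synthetic spectrogram, applies a 2D Fourier transform to the spectrogram and pools the resulting magnitudes into coarse modulation bins (the bin counts and pooling scheme here are illustrative assumptions, not the paper's exact feature set):

```python
import numpy as np

# Toy spectrogram: 64 log-frequency bins x 128 time frames
rng = np.random.default_rng(0)
spec = rng.random((64, 128))

# 2D FFT of the spectrogram: one axis indexes spectral modulation
# (cyc/oct), the other temporal modulation (Hz)
mod = np.abs(np.fft.fftshift(np.fft.fft2(spec)))

# Pool magnitudes into an 8 x 16 grid of modulation-rate bins,
# yielding a compact feature vector per clip
feat = mod.reshape(8, 8, 16, 8).mean(axis=(1, 3))
```

Features like these can then serve as the regressors of a voxel-wise encoding model; full MTF models additionally use banks of localized modulation filters rather than a single global FFT.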
Our findings elucidate the quantitative representation of music genres in the human cortex and indicate that this categorization of complex auditory stimuli can be modeled from brain activity.