Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany.
J Neurosci. 2024 Jul 24;44(30):e1331232024. doi: 10.1523/JNEUROSCI.1331-23.2024.
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work has addressed the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structure, so that listeners could rely only on harmonic information to parse a continuous musical stream. Phrasal structure was disrupted by locally or globally reversing the harmonic progression, providing control conditions against which responses to the original music could be compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking reflects active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
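The phrase-level signature described above lives in slow (∼0.1 Hz) modulations of EEG power rather than in the raw signal. The sketch below is a minimal, hypothetical illustration of that general analysis idea, not the authors' pipeline: it simulates an oscillatory signal whose band power is modulated at a phrase-like 0.1 Hz rate, extracts the power envelope with a Hilbert transform, and confirms that the envelope's spectrum peaks near 0.1 Hz. All names, filter settings, and signal parameters are assumptions for the demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                              # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)           # 10 min of simulated recording

# Simulated "EEG": a 10 Hz carrier whose amplitude is modulated at the
# phrase rate (0.1 Hz, i.e., one cycle per 10 s phrase), plus noise.
rng = np.random.default_rng(0)
phrase_envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 0.1 * t)
eeg = phrase_envelope * np.sin(2 * np.pi * 10 * t)
eeg += 0.1 * rng.standard_normal(t.size)

# Band-pass around the carrier, then take the power envelope
# via the analytic signal (Hilbert transform).
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
narrowband = filtfilt(b, a, eeg)
power = np.abs(hilbert(narrowband)) ** 2

# Spectrum of the slow power fluctuations.
power = power - power.mean()
spectrum = np.abs(np.fft.rfft(power))
freqs = np.fft.rfftfreq(power.size, 1 / fs)

# Find the dominant slow modulation between 0.02 and 1 Hz;
# it should fall near the 0.1 Hz phrase rate built into the simulation.
mask = (freqs > 0.02) & (freqs < 1.0)
peak_hz = freqs[mask][np.argmax(spectrum[mask])]
print(f"peak power-modulation frequency: {peak_hz:.3f} Hz")
```

With real data, the same logic would be applied per channel and compared across the original and harmonically reversed conditions; the long recording here (600 s) matters because resolving 0.1 Hz requires a frequency resolution much finer than the modulation rate itself.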