Institute of Medical Informatics, National Cheng Kung University (NCKU), Tainan, 70101, Taiwan.
Department of Computer Science and Information Engineering, NCKU, Tainan, 70101, Taiwan; Department of Computer Science, Tunghai University, Taichung, 407224, Taiwan.
Neuroimage. 2024 Sep;298:120784. doi: 10.1016/j.neuroimage.2024.120784. Epub 2024 Aug 13.
The perception of two (or more) simultaneous musical notes, depending on their pitch interval(s), can be broadly categorized as consonant or dissonant. Previous literature suggests that musicians and non-musicians adopt different strategies when discerning musical intervals: musicians rely on the frequency ratios between the two fundamental frequencies, treating, for example, the "perfect fifth" (3:2) as a consonant and the "tritone" (45:32) as a dissonant interval, whereas non-musicians may rely on the presence of 'roughness' or 'beats', generated by the difference between the fundamental frequencies, as the key element of 'dissonance'. Separate Event-Related Potential (ERP) differences in N1 and P2 along the midline electrodes provided evidence congruent with these separate reliances. To replicate and extend these findings, in this study we reran the previous experiment and separately collected fMRI data using the same protocol (modified for sparse sampling). The behavioral and EEG results largely corresponded to our previous findings. The fMRI results, jointly analyzed with univariate, psycho-physiological interaction, and representational similarity analysis (RSA) approaches, further reinforce the involvement of central midline brain regions, such as the ventromedial prefrontal and dorsal anterior cingulate cortex, in consonance/dissonance judgments. The final spatiotemporal searchlight RSA provided convincing evidence that the medial prefrontal cortex, along with the bilateral superior temporal cortex, is the joint locus of the midline N1 effect, and the dorsal anterior cingulate cortex that of the P2 effect (in musicians). Together, these analyses reaffirm that musicians rely more on experience-driven knowledge for consonance/dissonance perception, and also demonstrate the advantages of multiple analyses in mutually constraining the findings from both EEG and fMRI.