Bisesi Erica, Friberg Anders, Parncutt Richard
Centre for Systematic Musicology, University of Graz, Graz, Austria.
Laboratory "Perception and Memory", Department of Neuroscience, Institut Pasteur, Paris, France.
Front Psychol. 2019 Mar 29;10:317. doi: 10.3389/fpsyg.2019.00317. eCollection 2019.
Accents are local musical events that attract the attention of the listener, and can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical, and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic, and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments, involving 239 and 638 sonorities and 16 musicians and 5 experts in music theory, respectively. Average pairwise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all the raters were combined into a single measure expressing their consensus, correlations between ratings and model predictions ranged from 0.43 to 0.62. When accents of different categories were combined, correlations were higher than for the separate categories (0.66). This suggests that raters might use strategies different from the individual metrical, melodic, or harmonic accent models to mark musical events.
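The two statistics reported above — average pairwise correlation between raters and the correlation of a rater-consensus measure with model predictions — can be sketched as follows. This is a minimal illustration using standard Pearson correlation; the abstract does not specify how the consensus measure was computed, so taking the mean rating per sonority is an assumption here.

```python
import numpy as np

def average_pairwise_correlation(ratings):
    """Mean Pearson correlation over all distinct rater pairs.

    ratings: array of shape (n_raters, n_events), one row per rater.
    """
    r = np.corrcoef(ratings)             # n_raters x n_raters correlation matrix
    iu = np.triu_indices_from(r, k=1)    # upper triangle, diagonal excluded
    return r[iu].mean()

def consensus_vs_model(ratings, model_predictions):
    """Correlate a simple consensus (mean rating per event, an assumed
    definition) with the model's predicted accent saliences."""
    consensus = ratings.mean(axis=0)
    return np.corrcoef(consensus, model_predictions)[0, 1]

# Toy usage: two raters scoring four sonorities, plus model predictions.
ratings = np.array([[1.0, 2.0, 3.0, 4.0],
                    [2.0, 4.0, 6.0, 8.0]])
model = np.array([1.0, 2.0, 3.0, 4.0])
print(average_pairwise_correlation(ratings))   # perfectly agreeing raters -> 1.0
print(consensus_vs_model(ratings, model))      # consensus matches model -> 1.0
```

In the paper's setting, `ratings` would hold the 16 musicians' (or 5 experts') salience judgments per sonority, and `model_predictions` the computed immanent accent saliences.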