Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea.
Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea.
J Korean Med Sci. 2023 Mar 27;38(12):e82. doi: 10.3346/jkms.2023.38.e82.
Many studies have examined the perception of musical emotion using excerpts of familiar music with strongly expressed emotions in emotion-classification tasks. However, using familiar music to study musical emotion in people with acquired hearing loss can produce ambiguous results, because it is unclear whether the perceived emotion stems from previous experience with the music or from listening to the current stimulus. To overcome this limitation, we developed new musical stimuli for studying emotional perception without the influence of episodic memory.
A musician was instructed to compose five melodies with pitches evenly distributed around 1 kHz, each created to express one of five intended emotions. To evaluate whether these melodies expressed the intended emotions, two methods were applied. First, we classified the expressed emotion of each melody using musical features selected from 60 candidate features with genetic algorithm-based k-nearest neighbors. Second, forty-four people with normal hearing participated in an online survey on the emotional perception of the music, based on both dimensional and discrete approaches, to evaluate the stimulus set.
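The abstract does not include the implementation of the feature-selection step; a minimal sketch of genetic algorithm-based feature selection wrapped around a k-NN classifier, assuming a scikit-learn workflow, hypothetical feature/label arrays (X, y), and illustrative GA hyperparameters (population size, generations, mutation rate, k), could look like this:

```python
# Sketch: evolve binary masks over 60 musical features; fitness is the
# cross-validated accuracy of a k-NN classifier on the selected subset.
# All names and hyperparameters here are assumptions, not the authors' code.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def fitness(mask, X, y, k=5):
    """Cross-validated k-NN accuracy using only the features selected by mask."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

def ga_select(X, y, n_features=60, pop_size=30, n_gen=50, p_mut=0.02):
    """Evolve binary feature masks and return the best-scoring subset."""
    pop = rng.integers(0, 2, size=(pop_size, n_features))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut                 # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()].astype(bool)

# Usage (hypothetical data): X has one row per melody excerpt and 60 columns
# of extracted musical features; y holds the intended emotion labels.
# selected = ga_select(X, y)
# print("selected features:", selected.sum())
```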
The twenty-four selected musical features classified the intended emotions with an accuracy of 76%. The online survey in the normal hearing (NH) group showed that the intended emotion was selected significantly more often than the other emotions. K-means clustering analysis revealed that the arousal and valence ratings of the melodies corresponded to the representative quadrants of interest. Additionally, the applicability of the stimuli was tested in four individuals with high-frequency hearing loss.
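For the clustering step, a minimal sketch of K-means applied to per-melody arousal and valence ratings, with the ratings array, the zero-centred rating scale, and the choice of four clusters all illustrative assumptions, might be:

```python
# Sketch: cluster arousal/valence ratings and report which quadrant of the
# arousal-valence plane each cluster centre falls in. Data are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical ratings: one row per (participant, melody); columns are
# valence and arousal on a scale centred at 0.
ratings = np.array([
    [ 0.8,  0.7], [ 0.7,  0.6],   # high valence / high arousal
    [-0.6,  0.8], [-0.7,  0.7],   # low valence / high arousal
    [-0.7, -0.6], [-0.8, -0.5],   # low valence / low arousal
    [ 0.6, -0.7], [ 0.7, -0.6],   # high valence / low arousal
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(ratings)

for valence, arousal in kmeans.cluster_centers_:
    quadrant = (("high" if valence > 0 else "low") + " valence / "
                + ("high" if arousal > 0 else "low") + " arousal")
    print(f"centre ({valence:+.2f}, {arousal:+.2f}) -> {quadrant}")
```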
When applied to individuals with NH, the musical stimuli were shown to convey the expressed emotions with high classification accuracy. These results confirm that the stimulus set can be used to study perceived emotion in music, demonstrating its validity independent of innate musical biases such as those arising from episodic memory. Furthermore, the stimuli could be helpful for further studies of perceived musical emotion in people with hearing loss because the pitch range is controlled for each emotion.