Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands.
Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands.
Trends Hear. 2023 Jan-Dec;27:23312165221141142. doi: 10.1177/23312165221141142.
While previous research investigating music emotion perception in cochlear implant (CI) users has observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music, often not well perceived by CI users, reportedly conveys emotional valence (positive/negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied along two carriers (sinewave or noise; primarily modulating temporal information) and two filter orders (low or high; primarily modulating spectral information). Results indicated that emotion categorization was above chance for vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that even reduced temporal content successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult with both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI signal may immediately benefit music emotion perception in CI users.
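The vocoder manipulation described above follows the general channel-vocoding scheme: split the signal into frequency bands, extract each band's amplitude envelope (the temporal content), and remodulate the envelopes onto either sinewave or noise carriers. A minimal illustrative sketch of this idea is given below; it uses an FFT-based brick-wall band split rather than the study's actual filters, so the filter-order manipulation (spectral content) is not modeled, and all parameter values are hypothetical.

```python
import numpy as np

def vocode(signal, fs, n_channels=8, carrier="sine", env_cutoff=50.0):
    """Toy channel vocoder (illustrative sketch, not the study's processing):
    band-split -> envelope extraction -> remodulation onto a carrier."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Logarithmically spaced band edges (hypothetical 100 Hz - 8 kHz range)
    edges = np.geomspace(100.0, min(8000.0, fs / 2), n_channels + 1)
    rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass via FFT masking (brick-wall; real filter order not modeled)
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), spec, 0), n)
        # Envelope: rectify, then low-pass (keeps the slow temporal modulations)
        env_spec = np.fft.rfft(np.abs(band))
        env_spec[freqs > env_cutoff] = 0
        env = np.maximum(np.fft.irfft(env_spec, n), 0)
        # Sinewave carrier at the band's geometric centre, or a noise carrier
        if carrier == "sine":
            c = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        else:
            c = rng.standard_normal(n)
        out += env * c
    return out

# Usage: vocode one second of a 440 Hz tone with each carrier type
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
v_sine = vocode(tone, fs, carrier="sine")
v_noise = vocode(tone, fs, carrier="noise")
```

In this scheme the sinewave carrier preserves the envelope modulations more transparently than the noise carrier, which is consistent with the study's framing of carrier type as primarily a temporal-content manipulation.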