Xia Yuqi, Ren Lei, Zhang Xuehao, Huang Yan, Wei Chaogang, Liu Yuhe
Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, China.
Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China.
J Speech Lang Hear Res. 2025 Aug 12;68(8):4139-4157. doi: 10.1044/2025_JSLHR-23-00191. Epub 2025 Jul 17.
Cochlear implant (CI) listeners have deficits in emotional perception due to limited spectrotemporal fine structure. Contralateral hearing aids (HAs) carry additional acoustic cues for emotion recognition and improve the quality of life (QoL) in these individuals. This study aimed to investigate the effects of HAs on voice emotion recognition in Mandarin-speaking bimodal adults.
Nineteen Mandarin-speaking bimodal adults (mean age = 30.63 ± 8.73 years) and 20 normal-hearing (NH) adults (mean age = 27.15 ± 4.61 years) completed voice emotion (happy, angry, sad, scared, and neutral) recognition and monosyllable recognition tasks. Bimodal listeners completed voice emotion recognition and monosyllable recognition tasks with bimodal listening and CI-alone listening. Health-related QoL in bimodal listeners was evaluated using the Chinese version of the Nijmegen Cochlear Implant Questionnaire (NCIQ).
Acoustic analyses showed substantial variation across emotions in the voice emotion utterances, mainly in mean fundamental frequency (F0), F0 range, and duration. NH listeners significantly outperformed bimodal listeners in the voice emotion recognition and monosyllable recognition tasks, with higher accuracy scores and Hu values and shorter reaction times. Participants relied mainly on F0 cues in the voice emotion recognition task. Bimodal listeners perceived voice emotions more accurately and faster with bimodal devices than with CI alone, suggesting improved accuracy and decreased listening effort with the addition of HAs. Voice emotion recognition accuracy was associated with residual hearing in the nonimplanted ear and with monosyllable recognition accuracy in bimodal listeners. After correction for multiple comparisons, NCIQ scores were not significantly correlated with accuracy scores for either speech recognition or voice emotion recognition in bimodal listeners.
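The acoustic measures named above (mean F0, F0 range, and duration) can be illustrated with a minimal sketch. This is not the study's analysis pipeline; it uses a simple autocorrelation pitch tracker on a synthetic frequency glide, and all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def estimate_f0_track(signal, sr, frame_len=1024, hop=256, fmin=75.0, fmax=500.0):
    """Frame-wise F0 estimation via autocorrelation (illustrative sketch only).

    For each frame, the autocorrelation peak within the lag range
    corresponding to [fmin, fmax] is taken as the pitch period.
    """
    f0s = []
    lag_min = int(sr / fmax)  # shortest plausible pitch period (samples)
    lag_max = int(sr / fmin)  # longest plausible pitch period (samples)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        if ac[0] <= 0:  # silent frame: nothing to estimate
            continue
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        f0s.append(sr / lag)
    return np.array(f0s)

# Synthetic "utterance": a 0.5 s tone gliding from 200 Hz to 300 Hz,
# standing in for the F0 contour of an emotional utterance.
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
x = np.sin(2 * np.pi * (200 * t + 100 * t**2))

f0 = estimate_f0_track(x, sr)
mean_f0 = f0.mean()          # mean F0 across voiced frames
f0_range = f0.max() - f0.min()  # F0 range
duration = len(x) / sr       # utterance duration in seconds
```

In a real analysis one would use a validated pitch tracker (e.g., Praat's) and voiced/unvoiced detection; this sketch only shows how the three reported measures are derived from an F0 contour.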
Despite experiencing more challenges than NH peers, Mandarin-speaking bimodal listeners showed improved voice emotion perception when using contralateral HAs. Bimodal listeners with better residual hearing in the nonimplanted ear and better speech recognition ability showed better voice emotion perception.