

Predictors of Emotional Prosody Identification by School-Age Children With Cochlear Implants and Their Peers With Normal Hearing.

Affiliations

Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.

Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California, USA.

Publication Information

Ear Hear. 2024;45(2):411-424. doi: 10.1097/AUD.0000000000001436. Epub 2023 Oct 9.

Abstract

OBJECTIVES

Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but understanding them will be crucial for designing technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare with those in children with normal hearing listening to either normal emotional speech or degraded speech.

DESIGN

We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened to both original speech and to versions that had been noise-vocoded to simulate CI information processing.
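The noise vocoding used for the CI-simulated condition is a standard channel-vocoder manipulation: the speech is split into a small number of frequency bands, the slow amplitude envelope of each band is extracted, and each envelope is used to modulate noise filtered to the same band, discarding spectral fine structure much as a CI's envelope-based processing does. The sketch below (Python with NumPy/SciPy) illustrates the idea; the channel count, band edges, and envelope cutoff are illustrative assumptions, since the abstract does not report the study's actual vocoder parameters.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=160.0):
    # Log-spaced analysis band edges (assumed values, not the study's)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)            # isolate one analysis band
        env = sosfiltfilt(env_sos, np.abs(band))   # rectify + smooth -> envelope
        carrier = sosfiltfilt(band_sos, noise)     # noise limited to the same band
        out += np.maximum(env, 0.0) * carrier      # envelope-modulated noise band
    out *= np.sqrt(np.mean(x**2) / (np.mean(out**2) + 1e-12))  # match input RMS
    return out

With fewer channels the simulation removes more spectral detail; the pitch cues that carry much of emotional prosody are largely destroyed, while slower amplitude and duration cues survive.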

RESULTS

Group comparison confirmed the expected deficit in the CI participants' emotion identification relative to participants with normal hearing. Within the CI group, hearing age (which correlated with developmental age) and nonverbal cognition predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and interacted with hearing age and cognition. Age at implantation was not predictive of emotion identification. Unlike in the CI group, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether they listened to original or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in scores across talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits relative to their performance with the original materials, and their scores varied least across talkers and emotions. CI participants' scores varied more across talkers and emotions than those of participants with normal hearing listening to CI-simulated speech, but less than those of participants with normal hearing listening to original speech.
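To make the notions of "predictor" and "interaction" here concrete, the following sketch fits the kind of mixed-effects model such an analysis might use: identification scores regressed on hearing age and nonverbal cognition, with talker and emotion as stimulus factors, their interactions with the child-level predictors, and a random intercept per child. The data frame, column names, and model terms are hypothetical stand-ins; the abstract does not specify the authors' actual statistical model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
emotions = ["neutral", "angry", "happy", "sad", "scared"]
# Hypothetical long-format data: one row per child x talker x emotion
df = pd.DataFrame([
    {"subject": s, "talker": t, "emotion": e,
     "hearing_age": ha, "cognition": cog,
     "score": rng.uniform(0.2, 1.0)}                   # placeholder accuracies
    for s, ha, cog in zip(range(47),
                          rng.uniform(2, 15, 47),      # years of device use
                          rng.normal(100, 15, 47))     # nonverbal-cognition score
    for t in range(4) for e in emotions
])
# Fixed effects: child-level predictors crossed with stimulus factors;
# the random intercept groups repeated measures by child
model = smf.mixedlm("score ~ hearing_age * C(talker) + cognition * C(emotion)",
                    data=df, groups=df["subject"])
print(model.fit().summary())

A significant hearing_age x talker or cognition x emotion term in such a model would correspond to the interactions described above, where stimulus effects depend on the listener's experience or cognitive resources.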

CONCLUSIONS

Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but that they improve with age and experience at a rate similar to that of peers with normal hearing. Unlike in participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more emotionally expressive than others, and this effect was greater in CI users with less device experience (or who were younger) than in those with more device experience (or who were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or in children with normal hearing.


