Department of Communication Sciences and Disorders, Baylor University, Waco, Texas.
Division of Otolaryngology, Baylor Scott & White Medical Center, Temple, Texas.
J Am Acad Audiol. 2021 Sep;32(8):521-527. doi: 10.1055/s-0041-1731699. Epub 2021 Dec 29.
Cochlear implant technology allows acoustic and electric stimulation to be combined across ears (bimodal) or within the same ear (electric acoustic stimulation [EAS]). The mechanisms used to integrate speech acoustics may differ between bimodal and EAS hearing, and the configuration of hearing loss may be an important factor in that integration. Thus, it is important to differentiate the effects of different configurations of hearing loss on bimodal or EAS benefit in speech perception (the difference in performance between combined acoustic and electric stimulation and the better single stimulation alone).
Using acoustic simulation, we determined how consonant recognition was affected by different configurations of hearing loss in bimodal and EAS hearing.
A mixed design was used with one between-subjects variable (simulated bimodal group vs. simulated EAS group) and one within-subjects variable (acoustic stimulation alone, electric stimulation alone, and combined acoustic and electric stimulation).
Twenty adult subjects (10 for each group) with normal hearing were recruited.
Consonant perception was measured unilaterally or bilaterally in quiet. For the acoustic stimulation, four simulations of hearing loss were created by band-pass filtering the consonants with a fixed lower cutoff frequency of 100 Hz and one of four upper cutoff frequencies: 250, 500, 750, or 1,000 Hz. For the electric stimulation, an eight-channel noise vocoder generated a typical spectral mismatch using fixed input (200-7,000 Hz) and output (1,000-7,000 Hz) frequency ranges. The effects of simulated hearing loss on consonant recognition were compared between the two groups.
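The two simulations described above can be sketched in code. The following is a minimal illustration, not the authors' actual implementation: the sampling rate, filter order, envelope-extraction method, and log-spaced band edges are all assumptions, since the abstract specifies only the cutoff frequencies, the channel count, and the input/output frequency ranges.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 16000  # sampling rate in Hz (assumed; not stated in the abstract)

def bandpass(sig, lo, hi, fs=FS):
    """Zero-phase Butterworth band-pass filter (order is an assumption)."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

def simulate_acoustic(sig, upper_cutoff, fs=FS):
    """Simulated residual acoustic hearing: 100 Hz up to one of the
    four upper cutoffs (250, 500, 750, or 1000 Hz) from the abstract."""
    return bandpass(sig, 100, upper_cutoff, fs)

def noise_vocode(sig, n_channels=8, in_range=(200, 7000),
                 out_range=(1000, 7000), fs=FS):
    """Eight-channel noise vocoder with a spectral mismatch: temporal
    envelopes are extracted from analysis bands spanning `in_range`
    and used to modulate noise carriers spanning `out_range`."""
    rng = np.random.default_rng(0)
    # log-spaced band edges (an assumption; edge spacing is not stated)
    in_edges = np.geomspace(in_range[0], in_range[1], n_channels + 1)
    out_edges = np.geomspace(out_range[0], out_range[1], n_channels + 1)
    out = np.zeros(len(sig), dtype=float)
    for ch in range(n_channels):
        band = bandpass(sig, in_edges[ch], in_edges[ch + 1], fs)
        env = np.abs(hilbert(band))  # Hilbert temporal envelope
        carrier = rng.standard_normal(len(sig))
        carrier = bandpass(carrier, out_edges[ch], out_edges[ch + 1], fs)
        out += env * carrier
    return out
```

Because the analysis bands (200-7,000 Hz) feed shifted carrier bands (1,000-7,000 Hz), the vocoder output carries the envelope cues at spectrally mismatched places, mimicking a typical cochlear implant frequency-to-electrode misalignment.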
Significant bimodal and EAS benefits occurred regardless of the configurations of hearing loss and hearing technology (bimodal vs. EAS). Place information was better transmitted in EAS hearing than in bimodal hearing.
These results suggest that the configuration of hearing loss is not a significant factor in integrating consonant information between acoustic and electric stimulation. The results also suggest that the mechanisms used to integrate consonant information may be similar between bimodal and EAS hearing.