Meredith Spratford, Hannah Hodson McLean, Ryan McCreery
Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE.
Eastern Virginia Medical School, Norfolk, VA.
J Am Acad Audiol. 2017 Oct;28(9):799-809. doi: 10.3766/jaaa.16151.
Background: Access to aided high-frequency speech information is currently assessed behaviorally using recognition of plural monosyllabic words. Because semantic and grammatical cues support word+morpheme recognition in sentence materials, the contribution of high-frequency audibility to sentence recognition is smaller than it is for isolated words. However, young children may not yet have the linguistic competence to take advantage of these cues. A low-predictability sentence recognition task that controls for language ability could be used to assess the impact of high-frequency audibility in a context that more closely represents how children learn language.
Purpose: To determine whether recognition of s/z-inflected monosyllabic words differs for children with normal hearing (CNH) and children who are hard of hearing (CHH) across stimulus context (presented in isolation versus embedded medially within a sentence with low semantic and syntactic predictability) and levels of high-frequency audibility (4- and 8-kHz low-pass filtered for CNH; 8-kHz low-pass filtered for CHH).
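To make the bandwidth manipulation concrete, the sketch below low-pass filters a stimulus at the two cutoff frequencies named above. The filter type, order, and sampling rate are illustrative assumptions; the abstract does not specify how the filtering was implemented.

```python
# Minimal sketch of the 4- and 8-kHz bandwidth conditions, assuming an offline
# zero-phase Butterworth low-pass filter. Filter design and sampling rate are
# assumptions, not details taken from the study.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_filter(signal: np.ndarray, cutoff_hz: float, fs: float, order: int = 8) -> np.ndarray:
    """Apply a zero-phase Butterworth low-pass filter (assumed design)."""
    sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 44100                              # assumed sampling rate of the recordings
stimulus = np.random.randn(fs)          # placeholder for a recorded word or sentence

stim_4k = lowpass_filter(stimulus, 4000, fs)   # limited high-frequency audibility
stim_8k = lowpass_filter(stimulus, 8000, fs)   # maximized high-frequency audibility
```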
Research Design: A prospective, cross-sectional design was used to analyze word+morpheme recognition in noise for stimuli varying in grammatical context and high-frequency audibility. Low-predictability sentence stimuli were created so that the target word+morpheme could not be predicted from semantic or syntactic cues. Electroacoustic measures of aided access to high-frequency speech sounds were used to predict individual differences in recognition for CHH.
Study Sample: Thirty-five children aged 5-12 years participated: 24 CNH and 11 CHH (bilateral mild-to-severe hearing loss) who wore hearing aids (HAs). All children were native speakers of English.
Data Collection and Analysis: Monosyllabic word+morpheme recognition was measured in isolated and sentence-embedded conditions at a +10 dB signal-to-noise ratio in steady-state, speech-shaped noise. Real-ear probe microphone measures of the HAs were obtained for CHH. To assess the effects of high-frequency audibility on word+morpheme recognition for CNH, a repeated-measures ANOVA was used with bandwidth (8 kHz, 4 kHz) and context (isolated, sentence-embedded) as within-subjects factors. To compare recognition between CNH and CHH, a mixed-model ANOVA was completed with context (isolated, sentence-embedded) as a within-subjects factor and hearing status as a between-subjects factor. Bivariate correlations between word+morpheme recognition scores and electroacoustic measures of high-frequency audibility were used to assess which measures might be sensitive to differences in perception for CHH.
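The sketch below illustrates how analyses of this kind could be run in Python with the pingouin package; the abstract does not name the statistical software used, and the data layout, column names, and synthetic scores are hypothetical.

```python
# Hedged sketch of the analysis plan described above, using pingouin as one
# possible tool. All data below are synthetic placeholders.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical long-format scores: one row per child per condition.
rows = []
for subj in range(24):                                  # 24 CNH, two bandwidths
    for bandwidth in ("4kHz", "8kHz"):
        for context in ("isolated", "sentence"):
            rows.append(("CNH", f"cnh{subj}", bandwidth, context, rng.uniform(40, 100)))
for subj in range(11):                                  # 11 CHH, aided (8-kHz) only
    for context in ("isolated", "sentence"):
        rows.append(("CHH", f"chh{subj}", "8kHz", context, rng.uniform(30, 90)))
df = pd.DataFrame(rows, columns=["group", "subject", "bandwidth", "context", "score"])

# CNH only: repeated-measures ANOVA with bandwidth and context within subjects.
cnh = df[df["group"] == "CNH"]
rm = pg.rm_anova(data=cnh, dv="score", within=["bandwidth", "context"], subject="subject")

# CNH vs. CHH: mixed-model ANOVA with context within subjects and hearing
# status between subjects (full-bandwidth condition assumed for the comparison).
mixed = pg.mixed_anova(data=df[df["bandwidth"] == "8kHz"], dv="score",
                       within="context", between="group", subject="subject")

# CHH only: bivariate correlation between sentence-embedded recognition and a
# hypothetical electroacoustic measure (e.g., maximum audible frequency in Hz).
chh_sent = df[(df["group"] == "CHH") & (df["context"] == "sentence")].copy()
chh_sent["max_audible_frequency_hz"] = rng.uniform(4000, 9000, len(chh_sent))
corr = pg.corr(chh_sent["score"], chh_sent["max_audible_frequency_hz"])

print(rm, mixed, corr, sep="\n\n")
```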
Results: When high-frequency audibility was maximized, both CNH and CHH had better word+morpheme recognition in the isolated condition than in the sentence-embedded condition. When high-frequency audibility was limited, CNH had better word+morpheme recognition in the sentence-embedded condition than in the isolated condition. CHH whose HAs provided greater high-frequency speech bandwidth, as indexed by the maximum audible frequency, had better word+morpheme recognition in sentences.
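As an illustration of the electroacoustic measure mentioned in the results, the sketch below estimates a maximum audible frequency as the highest frequency at which the real-ear aided speech level exceeds the child's hearing threshold. The exact computation used in the study is not given in the abstract, and all frequencies and levels here are hypothetical.

```python
# Hedged sketch: estimate maximum audible frequency from hypothetical real-ear
# aided speech levels and thresholds, both expressed in dB SPL.
import numpy as np

freqs_hz = np.array([500, 1000, 2000, 3000, 4000, 6000, 8000])
aided_speech_spl = np.array([62, 60, 58, 55, 50, 44, 38])   # example aided speech levels
threshold_spl = np.array([45, 45, 50, 52, 55, 60, 65])      # example thresholds

# Audible wherever the aided speech level exceeds the threshold.
audible = aided_speech_spl > threshold_spl
max_audible_freq = freqs_hz[audible].max() if audible.any() else np.nan
print(f"Maximum audible frequency: {max_audible_freq} Hz")   # 3000 Hz in this example
```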
Conclusions: High-frequency audibility supports word+morpheme recognition within low-predictability sentences for both CNH and CHH. Maximum audible frequency can be used to estimate word+morpheme recognition for CHH. Low-predictability sentences that do not contain semantic or grammatical context may be of clinical use in estimating children's use of high-frequency audibility in a manner that approximates how they learn language.