Shehabi Sana, Comstock Daniel C, Mankel Kelsey, Bormann Brett M, Das Soukhin, Brodie Hilary, Sagiv Doron, Miller Lee M
Center for Mind and Brain, University of California, Davis, Davis, California 95618.
Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38152.
eNeuro. 2025 Apr 29;12(4). doi: 10.1523/ENEURO.0381-24.2025. Print 2025 Apr.
Individuals with normal hearing exhibit considerable variability in their capacity to understand speech in noisy environments. Previous research suggests this variance may arise from individual differences in cognition and auditory perception. To investigate the impact of cognitive and perceptual differences on speech comprehension, 25 adult human participants with normal hearing completed a battery of cognitive and psychoacoustic tasks including the Flanker, Stroop, Trail Making, reading span, and temporal fine structure tests. They also completed a continuous multitalker spatial attention task while neural activity was recorded using electroencephalography. The auditory cortical N1 response was extracted as a measure of neural speech encoding during continuous speech listening using an engineered "chirped-speech" (Cheech) stimulus. We compared N1 component morphologies for target and masker speech stimuli to assess neural correlates of attentional gain while listening to concurrently played short story narratives. Performance on the cognitive and psychoacoustic tasks was used to predict N1 component amplitude differences between attended and unattended speech using multiple regression. Results show that inhibitory control and working memory abilities predict N1 amplitude differences between the target and masker stories. Interestingly, none of the cognitive or psychoacoustic predictors correlated with behavioral speech-in-noise listening performance in the attention task, suggesting that neural measures may capture aspects of cognitive and auditory processing that behavioral measures alone do not.
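The regression analysis described above can be illustrated with a minimal sketch, assuming a per-participant table of task scores and N1 amplitudes; the file name and column names below are hypothetical and not the authors' actual variables or pipeline.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-participant summary table (illustrative column names only).
    df = pd.read_csv("participant_measures.csv")

    # Outcome: attended-minus-unattended N1 amplitude (target story minus masker story).
    df["n1_diff"] = df["n1_target"] - df["n1_masker"]

    # Multiple regression with cognitive and psychoacoustic scores as predictors.
    model = smf.ols(
        "n1_diff ~ flanker + stroop + trail_making + reading_span + tfs_threshold",
        data=df,
    ).fit()
    print(model.summary())

In this sketch, a significant coefficient for an inhibitory control or working memory predictor would correspond to the reported relationship between those abilities and the attended-versus-unattended N1 amplitude difference.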