Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.
Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
Brain Res. 2021 May 15;1759:147385. doi: 10.1016/j.brainres.2021.147385. Epub 2021 Feb 23.
Speech perception requires the grouping of acoustic information into meaningful phonetic units via the process of categorical perception (CP). Environmental masking influences speech perception and CP. However, it remains unclear at which stage of processing (encoding, decision, or both) masking affects listeners' categorization of speech signals. The purpose of this study was to determine whether linguistic interference influences the early acoustic-phonetic conversion process inherent to CP. To this end, we measured source-level, event-related brain potentials (ERPs) from auditory cortex (AC) and inferior frontal gyrus (IFG) as listeners rapidly categorized speech sounds along a /da/ to /ga/ continuum presented in three listening conditions: quiet, and in the presence of forward (informational masker) and time-reversed (energetic masker) 2-talker babble noise. Maskers were matched in overall SNR and spectral content and thus varied only in their degree of linguistic interference (i.e., informational masking). We hypothesized a differential effect of informational versus energetic masking on behavioral and neural categorization responses, predicting increased activation of frontal regions when disambiguating speech from noise, especially during lexical-informational masking. We found that (1) informational masking weakens behavioral speech phoneme identification above and beyond energetic masking; (2) low-level AC activity not only codes speech categories but is susceptible to higher-order lexical interference; and (3) identifying speech amidst noise recruits a cross-hemispheric circuit (AC → IFG) whose engagement varies according to task difficulty. These findings provide corroborating evidence for top-down influences on the early acoustic-phonetic analysis of speech through a coordinated interplay between frontotemporal brain areas.