Repetitive Exposure to Orofacial Somatosensory Inputs in Speech Perceptual Training Modulates Vowel Categorization in Speech Perception.
Author information
Ito Takayuki, Ogane Rintaro
Affiliations
Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France.
Haskins Laboratories, New Haven, CT, United States.
Publication information
Front Psychol. 2022 Apr 26;13:839087. doi: 10.3389/fpsyg.2022.839087. eCollection 2022.
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, such as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ changed as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on the maximum-likelihood procedure was applied to detect the category boundary using a small number of trials. In the Training phase, we used the method of constant stimuli in order to expose participants to stimulus variants that covered the range between /ε/ and /a/ evenly. In this phase, for the experimental group, somatosensory stimulation was applied in the upward direction when the stimulus sound was presented, to mimic the sensory input that accompanies speech production and learning. A control group (CTL) followed the same training procedure without somatosensory stimulation. When we compared category boundaries before and after paired auditory-somatosensory training, the boundary for participants in the experimental group reliably shifted toward /ε/, indicating that, as a consequence of training, participants perceived the stimuli as /a/ more often than as /ε/. In contrast, the CTL group did not show any change. Although only a limited number of participants were tested, the perceptual shift was reduced and almost eliminated 1 week later. Our data suggest that repetitive exposure to somatosensory inputs in a task that simulates the sensory pairing that occurs during speech production changes the perceptual system, supporting the idea that somatosensory inputs play a role in speech perceptual adaptation and probably contribute to the formation of sound representations for speech perception.
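The abstract refers to locating the /ε/–/a/ category boundary with a maximum-likelihood procedure. As a rough illustration only (not the authors' adaptive implementation), the sketch below fits a logistic psychometric function to identification responses along a vowel continuum by maximum likelihood and reads off the 50% point as the boundary; the continuum steps, trial counts, response data, and function names are all hypothetical.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, x, y):
        # Negative log-likelihood of a logistic psychometric function.
        # params: (boundary, slope); x: stimulus step on the /ε/-/a/ continuum;
        # y: 1 if the listener reported /a/, 0 if /ε/.
        boundary, slope = params
        p = 1.0 / (1.0 + np.exp(-slope * (x - boundary)))  # P(respond /a/)
        p = np.clip(p, 1e-9, 1 - 1e-9)                     # avoid log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Hypothetical data: 7-step continuum (0 = clear /ε/, 6 = clear /a/),
    # 10 identification trials per step (method of constant stimuli).
    steps = np.repeat(np.arange(7), 10)
    rng = np.random.default_rng(0)
    true_boundary, true_slope = 3.2, 1.5
    responses = rng.random(steps.size) < 1.0 / (1.0 + np.exp(-true_slope * (steps - true_boundary)))

    # Maximum-likelihood fit; the estimated boundary is the 50% identification point.
    fit = minimize(neg_log_likelihood, x0=[3.0, 1.0],
                   args=(steps, responses.astype(float)), method="Nelder-Mead")
    boundary_hat, slope_hat = fit.x
    print(f"Estimated /ε/-/a/ boundary: step {boundary_hat:.2f}")
    # A shift of this boundary toward /ε/ (lower steps) after training means more of
    # the continuum is heard as /a/, as reported for the experimental group.

In the study itself the Baseline and Aftereffect boundaries were obtained with an adaptive trial-selection procedure rather than a fixed set of trials; the fixed-grid data above are used only to keep the example self-contained.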