Trudeau-Fisette Paméla, Ito Takayuki, Ménard Lucie
Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada.
Centre for Research on Brain, Language and Music, Montreal, QC, Canada.
Front Hum Neurosci. 2019 Oct 4;13:344. doi: 10.3389/fnhum.2019.00344. eCollection 2019.
Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether this ability is innate or whether repeated sensory input while the brain is maturing leads to efficient integration of sensory information in speech. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19-30) and 14 children (age 5-6). The participants were asked to identify whether the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input, consisting of facial skin stretches delivered by a robotic device that mimic the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that the integration of auditory and somatosensory information develops over the course of maturation.