Department of Psychology, Western University, London, ON, Canada.
Brain and Mind Institute, Western University, London, ON, Canada.
Autism Res. 2017 Jul;10(7):1280-1290. doi: 10.1002/aur.1776. Epub 2017 Mar 24.
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically developing (TD) children to integrate auditory and visual speech stimuli at various signal-to-noise ratios (SNRs). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration for whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in the auditory, visual, and audiovisual modalities. However, in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate into typical multisensory benefit for whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in a subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition.
© 2017 International Society for Autism Research, Wiley Periodicals, Inc.