

Atypical audiovisual speech integration in infants at risk for autism.

Affiliation

Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, London, United Kingdom.

Publication information

PLoS One. 2012;7(5):e36428. doi: 10.1371/journal.pone.0036428. Epub 2012 May 15.

Abstract

The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, where one face is articulating /ba/ and the other /ga/, with one face congruent with the syllable sound being presented simultaneously, the other face incongruent. This method was successful in showing that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/-audio /ba/ and the congruent visual /ba/-audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/-audio /ga/ display compared with the congruent visual /ga/-audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
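The p-values reported for each interaction can be sanity-checked against the survival function of the F distribution with the stated degrees of freedom. A minimal sketch in pure standard-library Python (numerical trapezoidal integration of the F density, so values are approximate; the function name `f_sf` and the integration bounds are this sketch's own choices, not from the paper):

```python
import math

def f_sf(f_stat, d1, d2, upper=1000.0, n=200000):
    """Approximate P(X > f_stat) for an F(d1, d2) distribution by
    trapezoidal integration of the F density over [f_stat, upper]."""
    # Normalising constant of the F density, via the gamma function.
    c = (math.gamma((d1 + d2) / 2)
         / (math.gamma(d1 / 2) * math.gamma(d2 / 2))
         * (d1 / d2) ** (d1 / 2))

    def pdf(x):
        return c * x ** (d1 / 2 - 1) * (1 + d1 * x / d2) ** (-(d1 + d2) / 2)

    h = (upper - f_stat) / n
    total = 0.5 * (pdf(f_stat) + pdf(upper))  # trapezoid endpoints
    for i in range(1, n):
        total += pdf(f_stat + i * h)
    return total * h

# The three interactions reported in the abstract:
print(f_sf(17.153, 1, 16))  # low-risk: displays x fusion/mismatch conditions
print(f_sf(0.09, 1, 25))    # high-risk: displays x conditions
print(f_sf(4.466, 1, 41))   # displays x conditions x low/high-risk groups
```

Rounded to three decimal places, these reproduce the abstract's reported p = 0.001, 0.767 and 0.041.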


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/58be/3352915/bfbbbcf5114d/pone.0036428.g001.jpg
