
Slipped lips: onset asynchrony detection of auditory-visual language in autism.

Author information

Grossman Ruth B, Schneps Matthew H, Tager-Flusberg Helen

Affiliation

Lab of Developmental Cognitive Neuroscience, Boston University School of Medicine, Boston, Massachusetts 02118, USA.

Publication information

J Child Psychol Psychiatry. 2009 Apr;50(4):491-7. doi: 10.1111/j.1469-7610.2008.02002.x. Epub 2008 Dec 17.

Abstract

BACKGROUND

It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals with ASD. The aim of our study was to test whether adolescents with high-functioning autism are able to integrate AV information of meaningful, phrase-length language in a task of onset asynchrony detection.

METHODS

Participants were 25 adolescents with ASD and 25 typically developing (TD) controls. The stimuli were video clips of complete phrases using simple, commonly occurring words. The clips were digitally manipulated to have the video precede the corresponding audio by 0, 4, 6, 8, 10, 12, or 14 video frames, a range of 0-500ms. Participants were shown the video clips in random order and asked to indicate whether each clip was in-synch or not.
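
The frame offsets translate into millisecond values only once a frame rate is fixed, and the abstract reports the resulting range (0-500 ms) but not the rate itself. The short Python sketch below is illustrative rather than the authors' stimulus code: it assumes a standard video frame rate of about 30 fps, under which the largest 14-frame offset works out to roughly 467 ms, consistent with the stated range.

# A minimal sketch, not the authors' stimulus-preparation code: it maps the
# frame offsets listed in the Methods onto approximate millisecond values.
ASSUMED_FRAME_RATE_FPS = 30.0            # assumption; the abstract gives only the 0-500 ms range
FRAME_DURATION_MS = 1000.0 / ASSUMED_FRAME_RATE_FPS

SLIP_FRAMES = [0, 4, 6, 8, 10, 12, 14]   # video-leads-audio offsets from the Methods

def slip_ms(frames: int) -> float:
    """Convert a video-lead offset in frames to milliseconds."""
    return frames * FRAME_DURATION_MS

for f in SLIP_FRAMES:
    print(f"{f:2d} frames -> {slip_ms(f):6.1f} ms video lead")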

RESULTS

There were no differences between adolescents with ASD and their TD peers in accuracy of onset asynchrony detection at any slip rate.

CONCLUSION

These data indicate that adolescents with ASD are able to integrate auditory and visual components in a task of onset asynchrony detection using natural, phrase-length language stimuli. We propose that the meaningful nature of the language stimuli in combination with presentation in a non-distracting environment allowed adolescents with autism spectrum disorder to demonstrate preserved accuracy for bi-modal AV integration.

Similar articles

Audiovisual speech integration and lipreading in autism.
J Child Psychol Psychiatry. 2007 Aug;48(8):813-21. doi: 10.1111/j.1469-7610.2007.01766.x.

Cited by

Links between temporal acuity and multisensory integration across life span.
J Exp Psychol Hum Percept Perform. 2018 Jan;44(1):106-116. doi: 10.1037/xhp0000424. Epub 2017 Apr 27.

A novel behavioral paradigm to assess multisensory processing in mice.
Front Behav Neurosci. 2015 Jan 12;8:456. doi: 10.3389/fnbeh.2014.00456. eCollection 2014.

