
Increasing audiovisual speech integration in autism through enhanced attention to mouth.

Affiliations

Institute for Applied Linguistics, School of Foreign Languages, Central South University, Changsha, Hunan, China.

School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.

Publication Information

Dev Sci. 2023 Jul;26(4):e13348. doi: 10.1111/desc.13348. Epub 2022 Dec 1.

Abstract

Autistic children (AC) show less audiovisual speech integration in the McGurk task, which correlates with their reduced mouth-looking time. The present study examined whether AC's audiovisual speech integration in the McGurk task could be increased by increasing their mouth-looking time. We recruited 4- to 8-year-old AC and nonautistic children (NAC). In two experiments, we manipulated children's mouth-looking time, measured their audiovisual speech integration using the McGurk effect paradigm, and tracked their eye movements. In Experiment 1, we blurred the eyes in the McGurk stimuli and compared children's performance in blurred-eyes and clear-eyes conditions. In Experiment 2, we cued children's attention to either the mouth or the eyes of the McGurk stimuli, or asked them to view the stimuli freely. We found that both blurring the speaker's eyes and cuing to the speaker's mouth increased mouth-looking time and audiovisual speech integration in the McGurk task in AC. Blurring the speaker's eyes and cuing to the speaker's mouth also increased mouth-looking time in NAC, but neither manipulation increased their audiovisual speech integration in the McGurk task. Our findings suggest that audiovisual speech integration in the McGurk task in AC can be increased by increasing their attention to the mouth. These findings contribute to a deeper understanding of the relations between face attention and audiovisual speech integration, and provide insights for developing professional supports to increase audiovisual speech integration in AC.

HIGHLIGHTS:
- The present study examined whether audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the speaker's mouth.
- Blurring the speaker's eyes increased mouth-looking time and audiovisual speech integration in the McGurk task in AC.
- Cuing to the speaker's mouth also increased mouth-looking time and audiovisual speech integration in the McGurk task in AC.
- Audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the speaker's mouth.

