ACTE.
ULB Neuroscience Institute.
J Exp Psychol Gen. 2021 Oct;150(10):2137-2157. doi: 10.1037/xge0001040. Epub 2021 Jun 17.
Low integration of speech sounds with the corresponding mouth movements likely contributes to the language acquisition difficulties that frequently characterize young autistic children. However, the existing empirical evidence either relies on complex verbal instructions or merely measures preferential gaze toward in-sync videos. The former method is clearly unsuited for young, minimally verbal, or nonverbal autistic children, while the latter is subject to several biases that make the data difficult to interpret. We designed a Reinforced Preferential Gaze paradigm that makes it possible to test multimodal integration in young, nonverbal autistic children and overcomes several of the methodological challenges faced by previous studies. We show that autistic children have difficulties temporally binding the speech signal with the corresponding articulatory gestures. A condition with structurally similar nonsocial video stimuli suggests that atypical multimodal integration in autism is not limited to speech stimuli. (PsycInfo Database Record (c) 2021 APA, all rights reserved).