
Audio-visual matching of speech and non-speech oral gestures in patients with aphasia and apraxia of speech.

Author Information

Schmid Gabriele, Ziegler Wolfram

Affiliation

EKN - Clinical Neuropsychology Research Group, Neuropsychological Department, City Hospital Bogenhausen, Dachauer Str. 164, 80992 Munich, Germany.

Publication Information

Neuropsychologia. 2006;44(4):546-55. doi: 10.1016/j.neuropsychologia.2005.07.002. Epub 2005 Aug 29.

Abstract

BACKGROUND

Audio-visual speech perception mechanisms provide evidence for a supra-modal nature of phonological representations, and a link between these mechanisms and motor representations of speech has been postulated. This raises the question of whether aphasic patients and patients with apraxia of speech are able to exploit the visual signal in speech perception, and whether implicit knowledge of audio-visual relationships is preserved in these patients. Moreover, it is unknown whether the audio-visual processing of mouth movements is organised differently in the speech domain than in the non-speech domain.

METHODS

We administered a discrimination task with speech and non-speech stimuli in four presentation modes: auditory, visual, bimodal, and cross-modal. We tested 14 healthy participants and 14 patients with aphasia and/or apraxia of speech.
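
The paper does not specify how trials were coded, but the crossed design (stimulus type x presentation mode, with same/different judgements) can be made concrete with a minimal sketch. All condition labels, trial fields, and the scoring function below are assumptions for illustration, not the authors' actual materials.

```python
# Hypothetical sketch of the 2 x 4 discrimination-task design described above.
# Trial format and field names are illustrative assumptions.
from itertools import product

STIMULUS_TYPES = ("speech", "non-speech")
PRESENTATION_MODES = ("auditory", "visual", "bimodal", "cross-modal")

def error_rate(trials):
    """Proportion of incorrect same/different judgements in a list of trials."""
    errors = sum(t["response"] != t["correct_response"] for t in trials)
    return errors / len(trials)

def summarise(trials):
    """Error rate for each stimulus-type x presentation-mode cell."""
    table = {}
    for stim, mode in product(STIMULUS_TYPES, PRESENTATION_MODES):
        cell = [t for t in trials
                if t["stimulus_type"] == stim and t["mode"] == mode]
        if cell:
            table[(stim, mode)] = error_rate(cell)
    return table

# Example: one correct and one incorrect trial in a single cell.
trials = [
    {"stimulus_type": "speech", "mode": "cross-modal",
     "response": "same", "correct_response": "same"},
    {"stimulus_type": "speech", "mode": "cross-modal",
     "response": "same", "correct_response": "different"},
]
print(summarise(trials))  # {('speech', 'cross-modal'): 0.5}
```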

RESULTS

Patients made substantially more errors than normal subjects on both the speech and the non-speech stimuli, in all presentation modes. Normal controls made only a few errors on the speech stimuli, regardless of the presentation mode, but showed high between-subject variability in the cross-modal matching of non-speech stimuli. The patients' cross-modal processing of non-speech stimuli was mainly predicted by their lower-face apraxia scores, while their audio-visual matching of syllables was predicted by word repetition abilities and the presence of apraxia of speech.
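
The abstract says the patients' matching performance was "predicted by" clinical measures but does not describe the statistical model. The following is a minimal sketch of the kind of least-squares regression such a claim implies; the predictor names and all numeric values are made up for illustration.

```python
# Hypothetical regression sketch: modelling per-patient cross-modal matching
# scores from clinical predictors. Data and variable names are illustrative
# assumptions, not the authors' actual analysis.
import numpy as np

# Per-patient predictors (illustrative values): lower-face apraxia score,
# word repetition score, and presence of apraxia of speech (0/1).
face_apraxia    = np.array([3.0, 5.0, 2.0, 4.0, 1.0])
word_repetition = np.array([0.8, 0.6, 0.9, 0.5, 0.7])
has_aos         = np.array([0, 1, 0, 1, 1], dtype=float)

# Outcome: proportion correct on cross-modal matching (illustrative).
crossmodal_score = np.array([0.75, 0.55, 0.85, 0.50, 0.60])

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(face_apraxia),
                     face_apraxia, word_repetition, has_aos])
coef, residuals, rank, _ = np.linalg.lstsq(X, crossmodal_score, rcond=None)
print(dict(zip(["intercept", "face_apraxia", "word_repetition", "aos"], coef)))
```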

CONCLUSIONS

(1) Impaired speech perception in aphasia is located at a supra-modal representational level. (2) Audio-visual processing is different for speech and non-speech oral gestures. (3) Audio-visual matching abilities in patients with left-hemisphere lesions depend on their speech and non-speech motor abilities.
