Language and Communication Science, City University, London, UK.
Int J Lang Commun Disord. 2009 Sep-Oct;44(5):795-804. doi: 10.1080/13682820802256965.
BACKGROUND: To comprehend a speaker's intention fully in everyday communication, listeners integrate information from multiple sources, including gesture and speech. No published studies have explored the impact of aphasia on the integration of iconic co-speech gesture and speech.
AIMS: To explore the impact of aphasia on co-speech gesture and speech integration in one participant with aphasia and 20 age-matched control participants.
METHODS & PROCEDURES: The participant with aphasia and the 20 control participants watched video vignettes of people producing 21 verb phrases under three conditions: verbal only (V), gesture only (G), and verbal-gesture combined (VG). Participants were required to select the corresponding picture from four alternatives: an integration target, a verbal-only match, a gesture-only match, and an unrelated foil. The probability of choosing the integration target in the VG condition, over and above what would be expected from the probabilities of choosing it in the V and G conditions, was termed the multi-modal gain (MMG).
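The abstract does not state the formula used to compute MMG; a minimal formalisation, assuming the baseline is the probability of reaching the integration target from either unimodal cue acting independently, would be:

\[ \mathrm{MMG} = P(\mathrm{VG}) - \bigl[ P(\mathrm{V}) + P(\mathrm{G}) - P(\mathrm{V})\,P(\mathrm{G}) \bigr] \]

where \(P(\mathrm{V})\), \(P(\mathrm{G})\), and \(P(\mathrm{VG})\) are the proportions of integration-target choices in each condition; under this assumption, a positive MMG indicates integration beyond independent cue combination.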
OUTCOMES & RESULTS: The participant with aphasia obtained a significantly lower multi-modal gain score than the control participants (p < 0.05). Error analysis indicated that in the speech and gesture integration task, the participant with aphasia relied on gesture to decode the message, whereas the control participants relied on speech. Further analysis of the speech-only and gesture-only tasks indicated that the participant with aphasia had intact gesture comprehension but impaired spoken word comprehension.
CONCLUSIONS & IMPLICATIONS: The results confirm the findings of Records (1994), who reported that impaired verbal comprehension leads to a greater reliance on gesture to decode messages. Moreover, multi-modal integration of information from speech and iconic gesture can be impaired in aphasia. The findings highlight the need for further exploration of the impact of aphasia on gesture and speech integration.