Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States.
Neuropsychologia. 2023 Oct 10;189:108665. doi: 10.1016/j.neuropsychologia.2023.108665. Epub 2023 Aug 22.
Real-world communication is situated in rich multimodal contexts containing both speech and gesture. Speakers often convey unique information in gesture that is not present in the speech signal (e.g., saying "He searched for a new recipe" while making a typing gesture). We examined the narrative retellings of participants with and without moderate-severe traumatic brain injury (TBI) across three timepoints over two online Zoom sessions to investigate whether people with TBI can integrate information from co-occurring speech and gesture and whether information from gesture persists across delays.
60 participants with TBI and 60 non-injured peers watched videos of a narrator telling four short stories. On key details, the narrator produced complementary gestures that conveyed unique information. Participants retold the stories at three timepoints: immediately after, 20 min later, and one week later. We examined the words participants used when retelling these key details, coding them as a Speech Match (e.g., "He searched for a new recipe"), a Gesture Match (e.g., "He searched for a new recipe online"), or Other (e.g., "He looked for a new recipe"). We also examined whether participants produced representative gestures themselves when retelling these details.
Despite recalling fewer story details, participants with TBI were as likely as non-injured peers to report information from gesture in their narrative retellings. All participants were more likely to report information from gesture, and to produce representative gestures themselves, one week later compared to immediately after hearing the story.
We demonstrated that speech-gesture integration is intact after TBI in narrative retellings. This finding has exciting implications for the utility of gesture to support comprehension and memory after TBI and expands our understanding of naturalistic multimodal language processing in this population.