Lu Xuejing, Ho Hao T, Sun Yanan, Johnson Blake W, Thompson William F
Department of Psychology, Macquarie University, Sydney, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, NSW, Australia.
Department of Psychology, Macquarie University, Sydney, NSW, Australia.
Neuroimage. 2016 Jul 15;135:142-51. doi: 10.1016/j.neuroimage.2016.04.043. Epub 2016 Apr 27.
While most normal-hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERPs) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However, when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusic individuals are capable of extracting information from both modalities in an AV task, but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels.