Sierra Pacific Mental Illness Research, Education and Clinical Center (MIRECC), VA Greater Los Angeles Healthcare System, Los Angeles, CA, USA.
Schizophr Res. 2013 Feb;143(2-3):348-53. doi: 10.1016/j.schres.2012.11.025. Epub 2012 Dec 29.
Patients with schizophrenia have well-established deficits in their ability to identify emotion from facial expression and tone of voice. In the visual modality, there is strong evidence that basic processing deficits contribute to impaired facial affect recognition in schizophrenia. However, few studies have examined the auditory modality for mechanisms underlying affective prosody identification. In this study, we explored links between different stages of auditory processing, indexed by event-related potentials (ERPs), and affective prosody detection in schizophrenia. Thirty-six schizophrenia patients and 18 healthy control subjects completed tasks of affective prosody identification, facial emotion identification, and tone matching, as well as two auditory oddball paradigms: a passive paradigm for mismatch negativity (MMN) and an active paradigm for P300. Relative to healthy controls, patients had significantly reduced MMN and P300 amplitudes, impaired auditory and visual emotion recognition, and poorer tone matching performance. Correlations between ERP and behavioral measures within the patient group revealed significant associations between affective prosody recognition and both MMN and P300 amplitudes. These relationships were modality specific, as MMN and P300 did not correlate with facial emotion recognition. In a regression analysis, the two ERP waves together accounted for 49% of the variance in affective prosody recognition. Our results support previous suggestions of a relationship between basic auditory processing abnormalities and affective prosody dysfunction in schizophrenia, and indicate that both relatively automatic pre-attentive processes (MMN) and later attention-dependent processes (P300) are involved in accurate auditory emotion identification. These findings provide support for bottom-up (e.g., perceptually based) cognitive remediation approaches.