Zhang Zhengyi, Zhang Gaoyan, Zhang Yuanyuan, Liu Hong, Xu Junhai, Liu Baolin
Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin, 300050, People's Republic of China.
State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, People's Republic of China.
Exp Brain Res. 2017 Dec;235(12):3743-3755. doi: 10.1007/s00221-017-5086-1. Epub 2017 Sep 27.
This study aimed to investigate functional connectivity in the brain during the cross-modal integration of polyphonic characters in Chinese audio-visual sentences. The visual sentences were all semantically reasonable, and the auditory pronunciations of the polyphonic characters in the corresponding sentence contexts varied across four conditions. To measure functional connectivity, correlation, coherence and the phase synchronization index (PSI) were used, and multivariate pattern analysis was then performed to detect consensus functional connectivity patterns. These analyses were confined to the time windows of three event-related potential components, the P200, N400 and late positive shift (LPS), in order to investigate the dynamic changes of the connectivity patterns at different cognitive stages. We found that when differentiating polyphonic characters with abnormal pronunciations from those with appropriate ones in audio-visual sentences, significant classification results were obtained based on coherence in the time window of the P200 component, correlation in the time window of the N400 component, and coherence and PSI in the time window of the LPS component. Moreover, the spatial distributions in these time windows also differed, with the recruitment of frontal sites in the time window of the P200 component, frontal-central-parietal regions in the time window of the N400 component, and central-parietal sites in the time window of the LPS component. These findings demonstrate that the functional interaction mechanisms differ at different stages of the audio-visual integration of polyphonic characters.
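The three connectivity measures named in the abstract (correlation, coherence and the phase synchronization index) are standard signal-processing quantities. As a minimal illustrative sketch only, the snippet below computes all three between two synthetic "channel" signals using NumPy and SciPy; the sampling rate, oscillation frequency and phase lag are arbitrary choices for the demonstration, not parameters from the study, and the actual analysis pipeline of the paper is not reproduced here.

```python
# Illustrative sketch (not the study's pipeline): correlation,
# magnitude-squared coherence, and phase synchronization index (PSI)
# between two synthetic signals. All parameters are arbitrary.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 250.0                       # sampling rate (Hz), arbitrary
t = np.arange(0, 4, 1 / fs)      # 4 s of data
f0 = 10.0                        # shared oscillation frequency (Hz)
rng = np.random.default_rng(0)

# Two channels sharing a 10 Hz rhythm with a constant phase lag + noise
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f0 * t + 0.5) + 0.1 * rng.standard_normal(t.size)

# 1) Pearson correlation of the raw time courses
corr = np.corrcoef(x, y)[0, 1]

# 2) Magnitude-squared coherence, read off at the shared frequency
f, cxy = coherence(x, y, fs=fs, nperseg=256)
coh_at_f0 = cxy[np.argmin(np.abs(f - f0))]

# 3) PSI from Hilbert-transform instantaneous phases:
#    PSI = |mean(exp(i*(phi_x - phi_y)))|; 0 = no sync, 1 = perfect sync
phi_x = np.angle(hilbert(x))
phi_y = np.angle(hilbert(y))
psi = np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

print(f"corr={corr:.2f}, coherence@{f0:.0f}Hz={coh_at_f0:.2f}, PSI={psi:.2f}")
```

Because the two signals share one strong oscillation with a fixed lag, all three measures come out high; in practice each measure is sensitive to a different aspect of coupling (linear amplitude covariation, band-limited spectral coupling, and phase locking, respectively), which is why studies such as this one compute them side by side.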