van Ackeren Markus J, Rueschemeyer Shirley-Ann
Department of Psychology, University of York, York, United Kingdom.
PLoS One. 2014 Jul 9;9(7):e101042. doi: 10.1371/journal.pone.0101042. eCollection 2014.
In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study, we addressed this issue by using EEG to examine the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity in the theta range (4-6 Hz) in the left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks that link distributed modality-specific networks to multimodal semantic hubs such as the left ATL.
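To make the reported analysis concrete, the sketch below shows one way theta-band (4-6 Hz) induced power could be contrasted between cross-modal and within-modal feature-verification trials using MNE-Python. This is a minimal illustration under stated assumptions, not the authors' pipeline: the file path, the condition labels (cross_modal, within_modal), and the wavelet parameters are hypothetical placeholders.

```python
# Minimal sketch (not the authors' pipeline): theta-band (4-6 Hz) power
# contrast between cross-modal and within-modal trials with MNE-Python.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

# Hypothetical epoched EEG data; path and condition names are placeholders.
epochs = mne.read_epochs("feature_verification-epo.fif")

freqs = np.arange(4.0, 6.5, 0.5)   # theta range reported in the study: 4-6 Hz
n_cycles = freqs / 2.0             # short Morlet wavelets at low frequencies

# Trial-averaged induced power per condition (inter-trial coherence not needed).
power_cross = tfr_morlet(epochs["cross_modal"], freqs=freqs,
                         n_cycles=n_cycles, return_itc=False)
power_within = tfr_morlet(epochs["within_modal"], freqs=freqs,
                          n_cycles=n_cycles, return_itc=False)

# Per-channel contrast, averaged over frequencies and time points:
# positive values indicate stronger theta power for cross-modal integration.
theta_contrast = (power_cross.data.mean(axis=(1, 2))
                  - power_within.data.mean(axis=(1, 2)))
```

In a full analysis along the lines described in the abstract, such a sensor-level contrast would be followed by source localization (to test the left ATL specifically) and by a connectivity measure in the theta band to assess long-range interactions between left ATL and the rest of the cortical network.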