Ahveninen Jyrki, Huang Samantha, Ahlfors Seppo P, Hämäläinen Matti, Rossi Stephanie, Sams Mikko, Jääskeläinen Iiro P
Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA.
Neuroimage. 2016 Jan 1;124(Pt A):858-868. doi: 10.1016/j.neuroimage.2015.09.044. Epub 2015 Sep 28.
Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform and middle temporal, or MT, areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events initially become linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical "what" and "where" pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.