School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, PR China; School of Computer Science, Northeast Electric Power University, Jilin 132012, PR China.
School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, PR China.
Int J Psychophysiol. 2020 May;151:7-17. doi: 10.1016/j.ijpsycho.2020.02.009. Epub 2020 Feb 20.
The integration of multisensory objects containing semantic information involves processing of both low-level co-stimulation and high-order semantic integration. To investigate audiovisual semantic integration, we utilized bimodal stimuli (AV, simultaneous presentation of an auditory sound and a visual picture; An, simultaneous presentation of an auditory sound and a visual noise; Vn, simultaneous presentation of a visual picture and an auditory noise; Fn, simultaneous presentation of an auditory noise and a visual noise) to remove the effect of co-stimulation integration and extract data regarding high-order semantic integration. Electroencephalography, with its high temporal resolution, was used to examine the neural mechanisms associated with co-stimulation-removed audiovisual semantic integration under attended and unattended conditions. By comparing (AV + Fn) with (An + Vn), we identified three effects related to co-stimulation-removed audiovisual semantic integration. In the attended condition, two semantic integration effects were observed: one over bilateral occipito-temporal regions at 220-240 ms and one over the frontal region at 560-600 ms. In the unattended condition, only one semantic integration effect, over the centro-frontal region at 340-360 ms, was observed. These effects reflected the semantic integration of pictures and sounds after removing the co-stimulation caused by spatiotemporal consistency. Moreover, the differences in the temporal and spatial distributions of these effects implied distinct neural mechanisms underlying attended and unattended semantic integration. In the attended condition, the audiovisual semantic information was initially integrated based on semantic congruency (220-240 ms) and then reanalyzed according to the current task (560-600 ms), a goal-driven process influenced by top-down attention. In contrast, in the unattended condition, no attentional resources were allocated, and semantic integration (340-360 ms) was an unconscious, automatic process.
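The (AV + Fn) versus (An + Vn) contrast is an additive-model step: both sums contain exactly one auditory and one visual event, so low-level co-stimulation effects cancel and only the semantic (picture + sound) interaction survives in the difference wave. The following is a minimal sketch of that computation on per-condition average ERPs, using simulated data; the variable names (erp_av, erp_an, erp_vn, erp_fn), array shapes, and sampling rate are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

# Simulated per-condition average ERPs: (n_channels, n_times),
# 1 kHz sampling, epoch from 0 to 800 ms post-stimulus.
# All names and shapes here are hypothetical.
rng = np.random.default_rng(0)
n_channels, n_times, sfreq = 64, 800, 1000.0
times = np.arange(n_times) / sfreq  # seconds from stimulus onset

erp_av = rng.standard_normal((n_channels, n_times))  # sound + picture
erp_an = rng.standard_normal((n_channels, n_times))  # sound + visual noise
erp_vn = rng.standard_normal((n_channels, n_times))  # picture + auditory noise
erp_fn = rng.standard_normal((n_channels, n_times))  # auditory + visual noise

# Co-stimulation-removed semantic integration effect:
# each side of the contrast has one auditory and one visual stimulus,
# so spatiotemporal co-stimulation cancels in the subtraction.
semantic_effect = (erp_av + erp_fn) - (erp_an + erp_vn)

# Mean amplitude of the effect in one reported window, e.g. the
# attended occipito-temporal effect at 220-240 ms.
win = (times >= 0.220) & (times <= 0.240)
window_mean = semantic_effect[:, win].mean(axis=1)  # one value per channel
print(window_mean.shape)
```

In practice the same window-averaging step would be repeated for the 560-600 ms (attended, frontal) and 340-360 ms (unattended, centro-frontal) windows over the corresponding electrode clusters before statistical testing.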