Universitat Pompeu Fabra, Barcelona, Spain.
J Cogn Neurosci. 2010 Feb;22(2):240-7. doi: 10.1162/jocn.2009.21202.
Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult in the nonspeech domain than in the speech domain. We constructed a biophysically realistic neural network model that simulates this experimental evidence. We propose that a stronger connection between modalities for speech underlies the behavioral difference between the speech and nonspeech domains; this could result from more extensive experience with speech stimuli. Because the match-to-sample paradigm does not allow conclusions about the integration of auditory and visual information, we also simulated two further conditions based on the same paradigm, which tested the integration of auditory and visual information within a single stimulus. New experimental data for these two conditions support the simulation results and suggest that audiovisual integration of discordant stimuli is stronger for speech than for nonspeech stimuli. According to the simulations, the connection strength between auditory and visual information determines, on the one hand, how well auditory information can be assigned to visual information and, on the other hand, the magnitude of multimodal integration.
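The abstract's central claim, that a single parameter, the cross-modal connection strength, governs both cross-modal matching and the size of the integration effect for discordant stimuli, can be illustrated with a toy simulation. The sketch below is not the authors' biophysically realistic network; it is a minimal two-pool firing-rate model, and every name in it (w_cross, audio_drive, visual_drive, the tanh transfer function) is an assumption introduced only for illustration. With weak coupling (a stand-in for the nonspeech domain) a discordant visual input barely moves the auditory pool, whereas with strong coupling (a stand-in for speech) the auditory response is pulled toward the visually driven one.

```python
# Minimal illustrative sketch (assumption: NOT the authors' biophysically
# detailed spiking model). Two mutually coupled firing-rate pools stand in
# for the auditory and visual populations; w_cross is the cross-modal
# connection strength that the abstract identifies as the key parameter.
import numpy as np

def simulate(w_cross, audio_drive, visual_drive, steps=2000, dt=1e-3, tau=0.02):
    """Euler-integrate two coupled rate units until they settle."""
    r_a, r_v = 0.0, 0.0
    for _ in range(steps):
        # Each pool integrates its own sensory drive plus cross-modal input.
        da = (-r_a + np.tanh(audio_drive + w_cross * r_v)) / tau
        dv = (-r_v + np.tanh(visual_drive + w_cross * r_a)) / tau
        r_a += dt * da
        r_v += dt * dv
    return r_a, r_v

# Discordant stimulus: strong visual drive, weak auditory drive.
for label, w in [("nonspeech (weak coupling)", 0.2), ("speech (strong coupling)", 0.8)]:
    r_a, r_v = simulate(w_cross=w, audio_drive=0.2, visual_drive=1.0)
    # Stronger coupling drags the auditory rate further toward the visual
    # one; this shift is the integration effect for discordant input.
    print(f"{label}: auditory rate = {r_a:.3f}, visual rate = {r_v:.3f}")
```

Under these assumed parameters the auditory pool's settled rate rises markedly when the coupling is increased, which reproduces only the qualitative pattern the abstract attributes to the speech domain: stronger audiovisual connections yield stronger integration of discordant stimuli.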