

Unraveling Audiovisual Perception Across Space and Time: A Neuroinspired Computational Architecture.

Authors

Cuppini Cristiano, Di Rosa Eleonore F, Astolfi Laura, Monti Melissa

Affiliations

Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Bologna, Italy.

Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, North Carolina, USA.

Publication

Eur J Neurosci. 2025 Aug;62(3):e70217. doi: 10.1111/ejn.70217.

Abstract

Accurate perception of audiovisual stimuli depends crucially on the spatial and temporal properties of each sensory component, with multisensory enhancement only occurring if those components are presented in spatiotemporal congruency. Although spatial localization and temporal detection of audiovisual signals have each been extensively studied, the neural mechanisms underlying their joint influence, particularly in spatiotemporally misaligned contexts, remain poorly understood. Moreover, empirical dissection of their respective contributions to behavioral outcomes proves challenging when spatial and temporal disparities are introduced concurrently. Here, we sought to elucidate the mutual interaction of temporal and spatial offsets on the neural encoding of audiovisual stimuli. To this end, we developed a biologically inspired neurocomputational model that reproduces behavioral evidence of perceptual phenomena observed in audiovisual tasks, i.e., the modality switch effect (temporal realm) and the ventriloquist effect (spatial realm). Tested against the race model, our network successfully simulates multisensory enhancement in reaction times due to the concurrent presentation of cross-modal stimuli. Further investigation of the mechanisms implemented in the network upheld the centrality of cross-sensory inhibition in explaining modality switch effects and of cross-modal and lateral intra-area connections in regulating the evolution of these effects in space. Finally, the model predicts an improvement in the temporal detection of different-modality stimuli with increasing between-stimuli eccentricity and indicates a plausible reduction in auditory localization bias for increasing interstimulus intervals between spatially disparate cues. Our findings provide novel insights into the neural computations underlying audiovisual perception and offer a comprehensive predictive framework to guide future experimental investigations of multisensory integration.
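The race-model comparison mentioned in the abstract refers to a standard benchmark for multisensory enhancement: under Miller's race model inequality, the cumulative distribution of audiovisual reaction times should never exceed the sum of the two unisensory distributions, and a violation of that bound indicates integration beyond mere statistical facilitation. The snippet below is a minimal sketch of such a check on simulated reaction-time samples; the distributions, parameters, and variable names are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# Minimal sketch of a race-model-inequality check (Miller, 1982):
# P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) for every time t.
# If the audiovisual CDF exceeds this bound, responses are faster than
# parallel unisensory channels alone can explain, i.e., evidence of
# multisensory enhancement. All data below are simulated placeholders.

rng = np.random.default_rng(0)

# Hypothetical reaction times in milliseconds (illustrative only).
rt_a = rng.normal(320, 40, 500)    # auditory-only trials
rt_v = rng.normal(350, 45, 500)    # visual-only trials
rt_av = rng.normal(280, 35, 500)   # audiovisual trials

def ecdf(samples, t):
    """Empirical cumulative distribution of `samples` evaluated at times `t`."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

t_grid = np.linspace(150, 500, 200)            # common grid of latencies
cdf_av = ecdf(rt_av, t_grid)
race_bound = np.clip(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 0.0, 1.0)

violation = np.max(cdf_av - race_bound)
print(f"maximum race-model violation: {violation:.3f}")
print("race model violated (enhancement)" if violation > 0 else "no violation")
```

In practice the bound is usually evaluated at fixed quantiles of the observed distributions and assessed statistically across participants; the sketch only shows the shape of the comparison the abstract refers to.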

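The abstract also credits cross-sensory inhibition with producing the modality switch effect, i.e., slower responses when the preceding stimulus was in the other modality. The toy simulation below illustrates that idea with two mutually inhibiting leaky-integrator units; the equations, parameters, and detection rule are didactic assumptions for this sketch and should not be read as the architecture of the published model.

```python
# Toy illustration of how cross-sensory inhibition can yield a modality
# switch cost: two leaky-integrator units (auditory, visual) suppress each
# other, so the visual unit starts from a suppressed state when the previous
# stimulus was auditory and therefore needs longer to reach its detection
# threshold. All parameters are assumed for the sketch only.

DT = 1.0         # integration step (ms)
TAU = 50.0       # unit time constant (ms)
W_INH = 0.5      # cross-sensory inhibition strength
THRESHOLD = 0.6  # activity level taken as "stimulus detected"

def time_to_detect_visual(first_is_auditory, stim1_ms=200.0, isi_ms=100.0, max_ms=600.0):
    """Time (ms) for the visual unit to cross threshold after a second,
    visual stimulus, given a first stimulus in either modality."""
    u_a = u_v = 0.0
    t, t2_onset = 0.0, stim1_ms + isi_ms
    while t < t2_onset + max_ms:
        i_a = 1.0 if (t < stim1_ms and first_is_auditory) else 0.0
        i_v = 1.0 if (t < stim1_ms and not first_is_auditory) or t >= t2_onset else 0.0
        # Leaky integration with rectified mutual inhibition between modalities.
        du_a = (-u_a + i_a - W_INH * max(u_v, 0.0)) / TAU
        du_v = (-u_v + i_v - W_INH * max(u_a, 0.0)) / TAU
        u_a += DT * du_a
        u_v += DT * du_v
        t += DT
        if t >= t2_onset and u_v >= THRESHOLD:
            return t - t2_onset
    return float("inf")

rt_switch = time_to_detect_visual(first_is_auditory=True)   # auditory -> visual
rt_repeat = time_to_detect_visual(first_is_auditory=False)  # visual -> visual
print(f"switch RT: {rt_switch:.0f} ms, repeat RT: {rt_repeat:.0f} ms")
print(f"modality switch cost: {rt_switch - rt_repeat:.0f} ms")
```

In this toy network, increasing W_INH or shortening isi_ms deepens the residual suppression and enlarges the simulated switch cost, which is the qualitative pattern the abstract attributes to cross-sensory inhibition.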
