

Modeling the Repetition-Based Recovering of Acoustic and Visual Sources With Dendritic Neurons.

Author Information

Dellaferrera Giorgia, Asabuki Toshitake, Fukai Tomoki

Affiliations

Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan.

Institute of Neuroinformatics, University of Zurich and Swiss Federal Institute of Technology Zurich (ETH), Zurich, Switzerland.

Publication Information

Front Neurosci. 2022 Apr 28;16:855753. doi: 10.3389/fnins.2022.855753. eCollection 2022.

Abstract

In natural auditory environments, acoustic signals originate from the temporal superimposition of different sound sources. The problem of inferring individual sources from ambiguous mixtures of sounds is known as blind source decomposition. Experiments on humans have demonstrated that the auditory system can identify sound sources as repeating patterns embedded in the acoustic input. Source repetition produces temporal regularities that can be detected and used for segregation. Specifically, listeners can identify sounds occurring more than once across different mixtures, but not sounds heard only in a single mixture. However, whether such a behavior can be computationally modeled has not yet been explored. Here, we propose a biologically inspired computational model to perform blind source separation on sequences of mixtures of acoustic stimuli. Our method relies on a somatodendritic neuron model trained with a Hebbian-like learning rule which was originally conceived to detect spatio-temporal patterns recurring in synaptic inputs. We show that the segregation capabilities of our model are reminiscent of the features of human performance in a variety of experimental settings involving synthesized sounds with naturalistic properties. Furthermore, we extend the study to investigate the properties of segregation on task settings not yet explored with human subjects, namely natural sounds and images. Overall, our work suggests that somatodendritic neuron models offer a promising neuro-inspired learning strategy to account for the characteristics of the brain segregation capabilities as well as to make predictions on yet untested experimental settings.
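The abstract's core mechanism — a neuron whose Hebbian-like plasticity latches onto the one pattern that repeats across successive mixtures — can be illustrated with a minimal, hypothetical sketch. This is not the authors' somatodendritic model; it is a single linear "dendritic" unit with an Oja-style normalized Hebbian update, trained on toy mixtures in which one sparse pattern (the repeating source) appears in every mixture while each distractor is heard only once. All names and parameters here are illustrative assumptions.

```python
import random

random.seed(0)
DIM = 40  # dimensionality of the toy "spectral" patterns

def rand_source():
    # sparse nonnegative pattern standing in for a short sound snippet
    return [1.0 if random.random() < 0.2 else 0.0 for _ in range(DIM)]

target = rand_source()                              # repeats across all mixtures
distractors = [rand_source() for _ in range(30)]    # each occurs in one mixture only

w = [random.uniform(0.0, 0.1) for _ in range(DIM)]  # dendritic synaptic weights
lr = 0.05

for _ in range(3):  # a few passes over the mixture sequence
    for d in distractors:
        x = [t + di for t, di in zip(target, d)]        # mixture of two sources
        y = sum(wi * xi for wi, xi in zip(w, x))        # somatic response
        # Hebbian-like update followed by L2 normalization (Oja-style):
        # the component common to all mixtures dominates the input covariance,
        # so the weights drift toward the repeating source.
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
        norm = sum(wi * wi for wi in w) ** 0.5
        w = [wi / norm for wi in w]

def cosine(a, b):
    na = sum(ai * ai for ai in a) ** 0.5
    nb = sum(bi * bi for bi in b) ** 0.5
    return sum(ai * bi for ai, bi in zip(a, b)) / (na * nb)

sim_target = cosine(w, target)                       # alignment with repeating source
sim_dist = max(cosine(w, d) for d in distractors)    # best alignment with any distractor
```

After training, the weight vector resembles the repeating source far more than any single-occurrence distractor, mirroring the human finding that only sounds heard across multiple mixtures are segregated. The real model adds a nonlinear somatodendritic interaction and spatio-temporal pattern detection, which this linear sketch omits.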


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d50f/9097820/a2f58ddd7d5a/fnins-16-855753-g0001.jpg
