
Dynamic Neural Fields with Intrinsic Plasticity.

Authors

Strub Claudius, Schöner Gregor, Wörgötter Florentin, Sandamirskaya Yulia

Affiliations

Autonomous Robotics Lab, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany.

Department of Computational Neuroscience, III Physics Institute, Georg-August-Universität Göttingen, Germany.

Publication

Front Comput Neurosci. 2017 Aug 31;11:74. doi: 10.3389/fncom.2017.00074. eCollection 2017.

Abstract

Dynamic neural fields (DNFs) are dynamical systems models that approximate the activity of large, homogeneous, and recurrently connected neural networks based on a mean field approach. Within dynamic field theory, the DNFs have been used as building blocks in architectures to model sensorimotor embedding of cognitive processes. Typically, the parameters of a DNF in an architecture are manually tuned in order to achieve a specific dynamic behavior (e.g., decision making, selection, or working memory) for a given input pattern. This manual parameter search requires expert knowledge and time to find and verify a suitable set of parameters. The DNF parametrization may be particularly challenging if the input distribution is not known in advance, e.g., when processing sensory information. In this paper, we propose the autonomous adaptation of the DNF resting level and gain by a learning mechanism of intrinsic plasticity (IP). To enable this adaptation, input and output measures for the DNF are introduced, together with a hyperparameter to define the desired output distribution. The online adaptation by IP makes it possible to predefine the DNF output statistics without knowledge of the input distribution and thus to compensate for changes in it. The capabilities and limitations of this approach are evaluated in a number of experiments.
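The adaptation scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it combines a 1-D Amari-type field with a Triesch-style IP rule (exponential target output distribution with mean `mu`), applied pointwise and averaged to update a global resting level `h` and sigmoid gain `beta`; all parameter values and the function name are illustrative.

```python
import numpy as np

def simulate_dnf_ip(inputs, n=101, tau=10.0, dt=1.0, eta=0.01, mu=0.2):
    """1-D dynamic neural field whose resting level h and sigmoid
    gain beta are adapted online by an intrinsic-plasticity rule.
    `mu` sets the mean of the desired (exponential) output distribution."""
    x = np.linspace(-10.0, 10.0, n)
    dx = x[1] - x[0]
    # lateral interaction kernel: local excitation, global inhibition
    w = (2.0 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) - 0.5) * dx

    u = np.zeros(n)          # field activation u(x)
    h, beta = -2.0, 1.0      # resting level and gain, adapted below
    for s in inputs:         # s: external input pattern over x
        f = 1.0 / (1.0 + np.exp(-beta * u))       # sigmoid output f(u)
        u = u + dt * (-u + h + s + w @ f) / tau   # Amari field dynamics
        # Triesch-style IP updates, averaged over field locations:
        # push the output statistics toward an exponential with mean mu
        db = eta * (1.0 - (2.0 + 1.0 / mu) * f + f ** 2 / mu)
        da = eta / beta + u * db
        h += db.mean()
        beta = max(beta + da.mean(), 0.1)         # keep the gain positive
    return u, h, beta
```

Driving the field with a stream of input patterns then adjusts `h` and `beta` toward the chosen output statistics without hand-tuning either parameter, which is the behavior the paper evaluates.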


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e37/5583149/f561cdabf2d0/fncom-11-00074-g0001.jpg
