
Learning hierarchically-structured concepts.

Affiliations

Massachusetts Institute of Technology, Cambridge, MA, USA.

King's College London, London, England, United Kingdom.

Publication Information

Neural Netw. 2021 Nov;143:798-817. doi: 10.1016/j.neunet.2021.07.033. Epub 2021 Aug 16.

Abstract

We use a recently developed synchronous Spiking Neural Network (SNN) model to study the problem of learning hierarchically-structured concepts. We introduce an abstract data model that describes simple hierarchical concepts. We define a feed-forward layered SNN model, with learning modeled using Oja's local learning rule, a well-known, biologically plausible rule for adjusting synapse weights. We define what it means for such a network to recognize hierarchical concepts; our notion of recognition is robust, in that it tolerates a bounded amount of noise. Then, we present a learning algorithm by which a layered network may learn to recognize hierarchical concepts according to our robust definition. We analyze correctness and performance rigorously; the amount of time required to learn each concept, after learning all of the sub-concepts, is approximately O((1/η) · k · ℓmax · (log(k) + 1/ɛ + b·log(k))), where k is the number of sub-concepts per concept, ℓmax is the maximum hierarchical depth, η is the learning rate, ɛ describes the amount of uncertainty allowed in robust recognition, and b describes the amount of weight decrease for "irrelevant" edges. An interesting feature of this algorithm is that it allows the network to learn sub-concepts in a highly interleaved manner. This algorithm assumes that the concepts are presented in a noise-free way; we also extend these results to accommodate noise in the learning process. Finally, we give a simple lower bound saying that, in order to recognize concepts with hierarchical depth two with noise tolerance, a neural network should have at least two layers. The results in this paper represent first steps in the theoretical study of hierarchical concepts using SNNs. The cases studied here are basic, but they suggest many directions for extensions to more elaborate and realistic cases.
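For readers unfamiliar with Oja's rule mentioned above, the following is a minimal illustrative sketch of it for a single linear neuron. This is not the paper's SNN construction; the variable names and the toy setup are my own.

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    """One Oja's-rule step: w <- w + eta * y * (x - y * w), where y = w . x.

    The subtractive term -eta * y**2 * w bounds the weight vector
    (its norm converges toward 1), unlike a plain Hebbian update.
    """
    y = w @ x                         # neuron's linear response
    return w + eta * y * (x - y * w)

# Repeatedly presenting one input drives w toward that input's direction.
w = np.array([1.0, 0.0])
x = np.array([3.0, 4.0])
for _ in range(2000):
    w = oja_update(w, x)
```

For a fixed input x, the stable fixed point is w = x/‖x‖ (here [0.6, 0.8]); with a stream of random inputs, Oja's rule instead converges to the principal component of the input covariance.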

