

A self-organising network that grows when required.

Authors

Stephen Marsland, Jonathan Shapiro, Ulrich Nehmzow

Affiliation

Division of Imaging Science and Biomedical Engineering, University of Manchester, UK.

Publication

Neural Netw. 2002 Oct-Nov;15(8-9):1041-58. doi: 10.1016/s0893-6080(02)00078-3.

Abstract

The ability to grow extra nodes is a potentially useful facility for a self-organising neural network. A network that can add nodes into its map space can approximate the input space more accurately, and often more parsimoniously, than a network with predefined structure and size, such as the Self-Organising Map. In addition, a growing network can deal with dynamic input distributions. Most of the growing networks that have been proposed in the literature add new nodes to support the node that has accumulated the highest error during previous iterations or to support topological structures. This usually means that new nodes are added only when the number of iterations is an integer multiple of some pre-defined constant, A. This paper suggests a way in which the learning algorithm can add nodes whenever the network in its current state does not sufficiently match the input. In this way the network grows very quickly when new data is presented, but stops growing once the network has matched the data. This is particularly important when we consider dynamic data sets, where the distribution of inputs can change to a new regime after some time. We also demonstrate the preservation of neighbourhood relations in the data by the network. The new network is compared to an existing growing network, the Growing Neural Gas (GNG), on an artificial dataset, showing how the network deals with a change in input distribution after some time. Finally, the new network is applied to several novelty detection tasks and is compared with both the GNG and an unsupervised form of the Reduced Coulomb Energy network on a robotic inspection task and with a Support Vector Machine on two benchmark novelty detection tasks.
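The core idea in the abstract, adding a node whenever the network in its current state does not sufficiently match the input, can be illustrated with a short sketch. The Python step below loosely follows that grow-when-required scheme; the Euclidean distance measure, the exponential activity function, the simplified habituation decay, and all parameter names (activity_threshold, firing_threshold, eps_b, eps_n, tau_b) are illustrative assumptions, not the paper's exact algorithm or parameter values.

```python
import numpy as np

def gwr_step(weights, edges, habituation, x,
             activity_threshold=0.8, firing_threshold=0.1,
             eps_b=0.1, eps_n=0.01, tau_b=0.05):
    """One illustrative grow-when-required training step (a sketch, not the
    paper's exact algorithm).

    weights:     (N, D) array of node weight vectors (N >= 2).
    edges:       set of frozenset({i, j}) topological connections.
    habituation: (N,) array in (0, 1]; 1.0 means a fresh, untrained node.
    x:           (D,) input vector.
    Returns possibly enlarged (weights, edges, habituation).
    """
    # Best and second-best matching nodes for this input, connected by an edge.
    dists = np.linalg.norm(weights - x, axis=1)
    s, t = np.argsort(dists)[:2]
    edges.add(frozenset((s, t)))

    # Activity of the winner: close to 1 when the input is well matched.
    activity = np.exp(-dists[s])

    if activity < activity_threshold and habituation[s] < firing_threshold:
        # The map does not match the input well enough and the winner is
        # already well trained, so grow a new node between winner and input.
        w_new = 0.5 * (weights[s] + x)
        weights = np.vstack([weights, w_new])
        habituation = np.append(habituation, 1.0)
        r = len(weights) - 1
        edges.discard(frozenset((s, t)))
        edges.update({frozenset((s, r)), frozenset((t, r))})
    else:
        # Otherwise adapt the winner and its topological neighbours towards
        # the input, scaled by how "fresh" (unhabituated) each node still is.
        weights[s] += eps_b * habituation[s] * (x - weights[s])
        for e in edges:
            if s in e:
                n = next(i for i in e if i != s)
                weights[n] += eps_n * habituation[n] * (x - weights[n])
        # Simplified habituation: the winner's firing counter decays so that
        # mature, well-placed nodes eventually stop moving.
        habituation[s] *= (1.0 - tau_b)
    return weights, edges, habituation

# Usage sketch: start from two random nodes and present inputs one at a time;
# the node count grows only while the map fails to match the data.
rng = np.random.default_rng(0)
weights = rng.random((2, 2))
edges, habituation = set(), np.ones(2)
for x in rng.random((500, 2)):
    weights, edges, habituation = gwr_step(weights, edges, habituation, x)
print(len(weights), "nodes grown")
```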

