
Hierarchical growth in neural networks structure: Organizing inputs by Order of Hierarchical Complexity.

Affiliations

CINTESIS - Center for Health Technology and Services Research, Porto, Portugal.

Dare Association, Inc. Boston, Massachusetts, United States of America.

Publication Information

PLoS One. 2023 Aug 31;18(8):e0290743. doi: 10.1371/journal.pone.0290743. eCollection 2023.

Abstract

Several studies demonstrate that the structure of the brain increases in hierarchical complexity throughout development. We tested whether the structure of artificial neural networks also increases in hierarchical complexity while learning a developmental task, the balance beam problem. Previous simulations of this developmental task do not reflect a necessary premise underlying development: a more complex structure can be built out of less complex ones, while ensuring that the more complex structure does not replace the less complex one. To address this necessity, we segregated the input set into subsets of increasing Order of Hierarchical Complexity. This is a complexity measure that has been extensively shown to underlie the complexity of behavior and is hypothesized to underlie the complexity of the neural structure of the brain. After segregating the input set, minimal neural network models were trained separately on each input subset, and models of adjacent complexity were analyzed sequentially to observe whether a structural progression occurred. Three different network structural progressions were found, all performing with similar accuracy, which points toward self-organization. Moreover, more complex structures could be built out of less complex ones without substituting them, successfully addressing catastrophic forgetting and leveraging the performance of previous models in the literature. Furthermore, the model structures trained on the two highest-complexity subsets performed better than existing simulations of the balance beam in the literature. As a major contribution, this work successfully addressed hierarchically complex structural growth in neural networks, and it is the first to segregate inputs by Order of Hierarchical Complexity. Since this measure can be applied to all domains of data, the present method can be applied to future simulations, systematizing the simulation of developmental and evolutionary structural growth in neural networks.
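The abstract compresses a concrete training procedure: segregate the balance beam inputs into subsets of increasing Order of Hierarchical Complexity (OHC), train a minimal network on the simplest subset, then build each more complex model on top of the previous one without substituting it. Below is a minimal sketch of that freeze-and-widen idea, not the authors' implementation: the four-scalar input encoding (weight and distance on each side of the fulcrum), the three-way output, and the names label, GrowingNet, and grow are all illustrative assumptions.

# A minimal sketch, not the authors' implementation. It illustrates the
# growth premise from the abstract: a wider network is built out of a
# trained narrower one, and gradient masking keeps the copied structure
# from being overwritten when training on the next OHC subset.
import torch
import torch.nn as nn

def label(w_left, d_left, w_right, d_right):
    """Balance beam ground truth: the side with the greater torque
    (weight * distance) tips down; equal torques balance."""
    t_left, t_right = w_left * d_left, w_right * d_right
    return 0 if t_left > t_right else 1 if t_right > t_left else 2

class GrowingNet(nn.Module):
    """Minimal MLP whose hidden width is a parameter, so a more
    complex model can literally contain a less complex one."""
    def __init__(self, n_in=4, n_hidden=2, n_out=3):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.out = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        return self.out(torch.relu(self.hidden(x)))

def grow(old, extra):
    """Widen `old` by `extra` hidden units. The old weights are copied
    in, and their gradients are zeroed by hooks, so later training
    cannot substitute the less complex structure."""
    n_old = old.hidden.out_features
    new = GrowingNet(old.hidden.in_features, n_old + extra,
                     old.out.out_features)
    with torch.no_grad():
        new.hidden.weight[:n_old] = old.hidden.weight
        new.hidden.bias[:n_old] = old.hidden.bias
        new.out.weight[:, :n_old] = old.out.weight
        new.out.bias.copy_(old.out.bias)  # reused as initialization only

    def freeze_rows(grad):   # hidden weights/bias of the copied units
        grad = grad.clone()
        grad[:n_old] = 0
        return grad

    def freeze_cols(grad):   # output weights fed by the copied units
        grad = grad.clone()
        grad[:, :n_old] = 0
        return grad

    new.hidden.weight.register_hook(freeze_rows)
    new.hidden.bias.register_hook(freeze_rows)
    new.out.weight.register_hook(freeze_cols)
    return new

Training would then proceed subset by subset, as the abstract describes: fit the smallest model on the lowest-OHC inputs, call grow, train only the new units on the next subset, and compare the structures of adjacent models for a progression.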


Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/86b8/10470958/ded7a3f63bef/pone.0290743.g001.jpg
