


Backpropagation Neural Tree.

Affiliations

Department of Computer Science, University of Reading, Reading, UK.

Center of System Biology, University of Cambridge, Cambridge, UK; Department of Biomedical & Biotechnological Sciences, University of Catania, Catania, Italy.

Publication Information

Neural Netw. 2022 May;149:66-83. doi: 10.1016/j.neunet.2022.02.003. Epub 2022 Feb 10.

DOI: 10.1016/j.neunet.2022.02.003
PMID: 35193079
Abstract

We propose a novel algorithm called Backpropagation Neural Tree (BNeuralT), which is a stochastic computational dendritic tree. BNeuralT takes random repeated inputs through its leaves and imposes dendritic nonlinearities through its internal connections, as a biological dendritic tree does. Given these biologically plausible, dendritic-tree-like properties, BNeuralT is a single-neuron neural tree model whose internal sub-trees resemble dendritic nonlinearities. The BNeuralT algorithm produces an ad hoc neural tree that is trained using a stochastic gradient descent optimizer such as gradient descent (GD), momentum GD, Nesterov accelerated GD, Adagrad, RMSprop, or Adam. BNeuralT training has two phases, each computed in a depth-first search manner: the forward pass computes the neural tree's output in a post-order traversal, while error backpropagation during the backward pass is performed recursively in a pre-order traversal. A BNeuralT model can be considered a minimal subset of a neural network (NN), meaning it is a "thinned" NN whose complexity is lower than that of an ordinary NN. Our algorithm produces high-performing and parsimonious models that balance complexity with descriptive ability on a wide variety of machine learning problems: classification, regression, and pattern recognition.
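The two-phase traversal described in the abstract (post-order forward pass, pre-order backward pass) can be sketched in a few lines of Python. This is a minimal illustration under assumed details, not the authors' implementation: the node classes, the tanh nonlinearity, and the plain-GD update are assumptions chosen for brevity.

```python
import math

class Leaf:
    """A leaf reads one (possibly repeated) input feature by index."""
    def __init__(self, index):
        self.index = index

class TreeNode:
    """An internal node: a weighted sum of its children passed
    through a tanh nonlinearity (one 'dendritic' branch)."""
    def __init__(self, children, weights, bias=0.0):
        self.children = children          # sub-trees or Leaf objects
        self.weights = weights            # one weight per child edge
        self.bias = bias
        self.out = 0.0
        self.child_outs = []
        self.grad_w = [0.0] * len(weights)
        self.grad_b = 0.0

def forward(node, x):
    """Post-order traversal: evaluate children first, then this node."""
    if isinstance(node, Leaf):
        return x[node.index]
    node.child_outs = [forward(c, x) for c in node.children]
    s = node.bias + sum(w * o for w, o in zip(node.weights, node.child_outs))
    node.out = math.tanh(s)
    return node.out

def backward(node, upstream):
    """Pre-order traversal: compute this node's gradients first,
    then recurse into the sub-trees."""
    if isinstance(node, Leaf):
        return
    local = upstream * (1.0 - node.out ** 2)      # derivative of tanh
    node.grad_b = local
    for i, child in enumerate(node.children):
        node.grad_w[i] = local * node.child_outs[i]
        backward(child, local * node.weights[i])

def sgd_step(node, lr):
    """Plain gradient descent update over the whole tree."""
    if isinstance(node, Leaf):
        return
    node.bias -= lr * node.grad_b
    for i in range(len(node.weights)):
        node.weights[i] -= lr * node.grad_w[i]
    for c in node.children:
        sgd_step(c, lr)
```

Swapping `sgd_step` for momentum GD, Adagrad, RMSprop, or Adam would only change how the accumulated gradients are applied; the two traversals stay the same.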


Similar Articles

1
Backpropagation Neural Tree.
Neural Netw. 2022 May;149:66-83. doi: 10.1016/j.neunet.2022.02.003. Epub 2022 Feb 10.
2
A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO).
IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):685-694. doi: 10.1109/TNNLS.2020.2979121. Epub 2021 Feb 4.
3
A Unified Analysis of AdaGrad With Weighted Aggregation and Momentum Acceleration.
IEEE Trans Neural Netw Learn Syst. 2024 Oct;35(10):14482-14490. doi: 10.1109/TNNLS.2023.3279381. Epub 2024 Oct 7.
4
A novel adaptive momentum method for medical image classification using convolutional neural network.
BMC Med Imaging. 2022 Mar 1;22(1):34. doi: 10.1186/s12880-022-00755-z.
5
Improving the efficiency of RMSProp optimizer by utilizing Nestrove in deep learning.
Sci Rep. 2023 May 31;13(1):8814. doi: 10.1038/s41598-023-35663-x.
6
A novel single neuron perceptron with universal approximation and XOR computation properties.
Comput Intell Neurosci. 2014;2014:746376. doi: 10.1155/2014/746376. Epub 2014 Apr 28.
7
Optimizing neural networks for medical data sets: A case study on neonatal apnea prediction.
Artif Intell Med. 2019 Jul;98:59-76. doi: 10.1016/j.artmed.2019.07.008. Epub 2019 Jul 25.
8
Learning smooth dendrite morphological neurons by stochastic gradient descent for pattern classification.
Neural Netw. 2023 Nov;168:665-676. doi: 10.1016/j.neunet.2023.09.033. Epub 2023 Sep 25.
9
Direct Feedback Alignment With Sparse Connections for Local Learning.
Front Neurosci. 2019 May 24;13:525. doi: 10.3389/fnins.2019.00525. eCollection 2019.
10
SNN: Time step reduction of spiking surrogate gradients for training energy efficient single-step spiking neural networks.
Neural Netw. 2023 Feb;159:208-219. doi: 10.1016/j.neunet.2022.12.008. Epub 2022 Dec 19.

Cited By

1
Design and Analysis of Hospital Throughput Maximization Algorithm under COVID-19 Pandemic.
Comput Math Methods Med. 2022 Aug 11;2022:8127055. doi: 10.1155/2022/8127055. eCollection 2022.