Toward Deep Adaptive Hinging Hyperplanes.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6373-6387. doi: 10.1109/TNNLS.2021.3079113. Epub 2022 Oct 27.

DOI: 10.1109/TNNLS.2021.3079113
PMID: 34048348
Abstract

The adaptive hinging hyperplane (AHH) model is a popular piecewise-linear representation with a generalized tree structure and has been successfully applied in dynamic system identification. In this article, we aim to construct the deep AHH (DAHH) model to extend and generalize the networking of the AHH model for high-dimensional problems. The network structure of DAHH is determined through forward growth, in which an activity ratio is introduced to select effective neurons and no connecting weights are involved between the layers. All neurons in the DAHH network can then be flexibly connected to the output in a skip-layer format, and only the corresponding output weights are the parameters to optimize. With such a network framework, the backpropagation algorithm can be implemented in DAHH to efficiently tackle large-scale problems, and the gradient-vanishing problem is not encountered in training. In fact, the optimization problem of DAHH remains convex for a convex loss at the output layer, which brings natural advantages in optimization. Unlike existing neural networks, DAHH is easier to interpret: its neurons are sparsely connected, and analysis of variance (ANOVA) decomposition can be applied, which helps reveal the interactions between variables. A theoretical analysis of the universal approximation ability and explicit domain partitions is also derived. Numerical experiments verify the effectiveness of the proposed DAHH.
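The abstract's central construction, fixed piecewise-linear neurons grown forward with all neurons wired straight to the output so that only the output weights are trained, can be illustrated with a small sketch. The Python snippet below is a hypothetical toy, not the paper's algorithm: the hinge basis max(0, s(x_j - t)), the min-combination used for the second layer, and the grid of thresholds are all illustrative assumptions. It only shows why fitting the skip-layer output weights under squared loss is a convex (here closed-form least-squares) problem.

import numpy as np

# Hypothetical sketch of the idea described in the abstract: fixed
# piecewise-linear neurons, all connected to the output in a skip-layer
# fashion, with only the output weights optimized (a convex problem
# for a convex loss; closed-form for squared loss).

rng = np.random.default_rng(0)

def hinge(x, j, t, s):
    """Simplified hinge basis max(0, s*(x_j - t)); an assumed stand-in
    for the AHH basis functions, not the paper's exact construction."""
    return np.maximum(0.0, s * (x[:, j] - t))

# Toy data: y = |x0| + noise, a piecewise-linear target.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.abs(X[:, 0]) + 0.05 * rng.standard_normal(200)

# "Layer 1": single hinges over a fixed grid of thresholds and signs.
# "Layer 2": min-combinations of layer-1 neurons, mimicking forward
# growth of deeper piecewise-linear features (combination rule assumed).
layer1 = [hinge(X, j, t, s)
          for j in range(X.shape[1])
          for t in (-0.5, 0.0, 0.5)
          for s in (-1.0, 1.0)]
layer2 = [np.minimum(layer1[i], layer1[k])
          for i in range(0, len(layer1), 4)
          for k in range(1, len(layer1), 5)]

# Skip-layer output: every neuron from every layer (plus a bias column)
# feeds the output directly; the only trainable parameters are w.
Phi = np.column_stack([np.ones(len(X))] + layer1 + layer2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # convex, closed form

print("train RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))

Under these assumptions the inner structure is fixed once grown, so swapping the squared loss for any other convex loss keeps the training problem convex, consistent with the optimization advantage the abstract claims.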


Similar Articles

1. Toward Deep Adaptive Hinging Hyperplanes.
   IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6373-6387. doi: 10.1109/TNNLS.2021.3079113. Epub 2022 Oct 27.
2. Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean.
   Neural Netw. 2022 Sep;153:87-103. doi: 10.1016/j.neunet.2022.05.030. Epub 2022 Jun 7.
3. Novel maximum-margin training algorithms for supervised neural networks.
   IEEE Trans Neural Netw. 2010 Jun;21(6):972-84. doi: 10.1109/TNN.2010.2046423. Epub 2010 Apr 19.
4. Evolving Connections in Group of Neurons for Robust Learning.
   IEEE Trans Cybern. 2022 May;52(5):3069-3082. doi: 10.1109/TCYB.2020.3022673. Epub 2022 May 19.
5. A Novel Learning Algorithm to Optimize Deep Neural Networks: Evolved Gradient Direction Optimizer (EVGO).
   IEEE Trans Neural Netw Learn Syst. 2021 Feb;32(2):685-694. doi: 10.1109/TNNLS.2020.2979121. Epub 2021 Feb 4.
6. A new optimized GA-RBF neural network algorithm.
   Comput Intell Neurosci. 2014;2014:982045. doi: 10.1155/2014/982045. Epub 2014 Oct 13.
7. An Approximation of the Error Backpropagation Algorithm in a Predictive Coding Network with Local Hebbian Synaptic Plasticity.
   Neural Comput. 2017 May;29(5):1229-1262. doi: 10.1162/NECO_a_00949. Epub 2017 Mar 23.
8. A lightweight and gradient-stable neural layer.
   Neural Netw. 2024 Jul;175:106269. doi: 10.1016/j.neunet.2024.106269. Epub 2024 Mar 26.
9. Noise can speed backpropagation learning and deep bidirectional pretraining.
   Neural Netw. 2020 Sep;129:359-384. doi: 10.1016/j.neunet.2020.04.004. Epub 2020 Apr 11.
10. A privacy preservation framework for feedforward-designed convolutional neural networks.
   Neural Netw. 2022 Nov;155:14-27. doi: 10.1016/j.neunet.2022.08.005. Epub 2022 Aug 10.