


A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5603-5612. doi: 10.1109/TNNLS.2020.2975051. Epub 2020 Nov 30.

DOI: 10.1109/TNNLS.2020.2975051
PMID: 32167912
Abstract

We show that a neural network whose output is obtained as the difference of the outputs of two feedforward networks with exponential activation function in the hidden layer and logarithmic activation function in the output node, referred to as log-sum-exp (LSE) network, is a smooth universal approximator of continuous functions over convex, compact sets. By using a logarithmic transform, this class of network maps to a family of subtraction-free ratios of generalized posynomials (GPOS), which we also show to be universal approximators of positive functions over log-convex, compact subsets of the positive orthant. The main advantage of difference-LSE networks with respect to classical feedforward neural networks is that, after a standard training phase, they provide surrogate models for a design that possesses a specific difference-of-convex-functions form, which makes them optimizable via relatively efficient numerical methods. In particular, by adapting an existing difference-of-convex algorithm to these models, we obtain an algorithm for performing an effective optimization-based design. We illustrate the proposed approach by applying it to the data-driven design of a diet for a patient with type-2 diabetes and to a nonconvex optimization problem.
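The construction the abstract describes can be sketched in a few lines: each LSE network applies exponential activations in the hidden layer and a logarithm at the output node, and the full model subtracts two such networks. This is an illustrative reading of the paper's model class, not the authors' code; the function names and plain-Python style are my own.

```python
import math

def lse(x, weights, biases):
    """One log-sum-exp (LSE) network: exponential hidden units, log output.

    Computes f(x) = log(sum_k exp(b_k + <a_k, x>)), which is smooth and
    convex in x (a smooth surrogate for max_k (b_k + <a_k, x>)).
    """
    return math.log(sum(
        math.exp(b + sum(a_i * x_i for a_i, x_i in zip(a, x)))
        for a, b in zip(weights, biases)))

def diff_lse(x, pos, neg):
    """Difference-LSE network: lse_pos(x) - lse_neg(x).

    The result is a difference of two convex functions (a DC function),
    which is what makes a trained surrogate of this form amenable to
    difference-of-convex optimization algorithms.
    """
    (w1, b1), (w2, b2) = pos, neg
    return lse(x, w1, b1) - lse(x, w2, b2)

# With a single hidden unit, an LSE network collapses to an affine map:
# lse(x, [[2.0]], [1.0]) = log(exp(1 + 2*x)) = 1 + 2*x.
```

With one unit per half the model is just a difference of affine functions; adding units makes each half a smooth max-like convex piece, and it is this DC structure that the paper exploits for optimization-based design after training.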


Similar Articles

1. A Universal Approximation Result for Difference of Log-Sum-Exp Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5603-5612. doi: 10.1109/TNNLS.2020.2975051. Epub 2020 Nov 30.
2. Log-Sum-Exp Neural Networks and Posynomial Models for Convex and Log-Log-Convex Data.
IEEE Trans Neural Netw Learn Syst. 2020 Mar;31(3):827-838. doi: 10.1109/TNNLS.2019.2910417. Epub 2019 May 15.
3. Parameterized Convex Universal Approximators for Decision-Making Problems.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2448-2459. doi: 10.1109/TNNLS.2022.3190198. Epub 2024 Feb 5.
4. Two-hidden-layer feed-forward networks are universal approximators: A constructive approach.
Neural Netw. 2020 Nov;131:29-36. doi: 10.1016/j.neunet.2020.07.021. Epub 2020 Jul 22.
5. Universal approximation using incremental constructive feedforward networks with random hidden nodes.
IEEE Trans Neural Netw. 2006 Jul;17(4):879-892. doi: 10.1109/TNN.2006.875977.
6. Universal approximation of extreme learning machine with adaptive growth of hidden nodes.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):365-71. doi: 10.1109/TNNLS.2011.2178124.
7. Smoothing neural network for L0 regularized optimization problem with general convex constraints.
Neural Netw. 2021 Nov;143:678-689. doi: 10.1016/j.neunet.2021.08.001. Epub 2021 Aug 8.
8. Constructive approximation to multivariate function by decay RBF neural network.
IEEE Trans Neural Netw. 2010 Sep;21(9):1517-23. doi: 10.1109/TNN.2010.2055888. Epub 2010 Aug 5.
9. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
10. Multifeedback-layer neural network.
IEEE Trans Neural Netw. 2007 Mar;18(2):373-84. doi: 10.1109/TNN.2006.885439.

Cited By

1. Dislocation Substructures Evolution and an Informer Constitutive Model for a Ti-55511 Alloy in Two-Stages High-Temperature Forming with Variant Strain Rates in β Region.
Materials (Basel). 2023 Apr 27;16(9):3430. doi: 10.3390/ma16093430.