Invariance, Encodings, and Generalization: Learning Identity Effects With Neural Networks.

Affiliations

Department of Mathematics and Statistics, Concordia University, Montreal, Quebec, H3G 1M8, Canada

Department of Mathematics, Simon Fraser University, Burnaby, British Columbia, V5A 1S6, Canada

Publication Information

Neural Comput. 2022 Jul 14;34(8):1756-1789. doi: 10.1162/neco_a_01510.

DOI: 10.1162/neco_a_01510
PMID: 35798322
Abstract

Often in language and other areas of cognition, whether two components of an object are identical or not determines if it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of learning algorithms, including deep feedforward neural networks trained via gradient-based algorithms (such as stochastic gradient descent or the Adam method), satisfies our criteria, dependent on the encoding of inputs. In some broader circumstances, we are able to provide adversarial examples that the network necessarily classifies incorrectly. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs. This allows us to show similar effects to those predicted by theory for more realistic methods that violate some of the conditions of our theoretical results.
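To make the setup concrete, here is a minimal, self-contained sketch (our illustration, not the authors' code) of an identity-effect task: a two-symbol input is labeled well formed exactly when both components are identical. The alphabet size, the choice of held-out symbols, the network width, and the use of scikit-learn's MLPClassifier are arbitrary choices for this sketch. With one-hot encodings, a feedforward network fits the training alphabet but, consistent with the abstract's claim, typically fails to extend the identity rule to symbols never seen in training.

```python
# Minimal identity-effect sketch (illustration only, not the paper's code).
# A pair (a, b) is "well formed" iff a == b. We train on symbols 0..7 and
# test on symbols 8..9, whose one-hot dimensions are never active in training.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_symbols = 10

def one_hot(i):
    v = np.zeros(n_symbols)
    v[i] = 1.0
    return v

def make_pairs(symbols, n):
    """Balanced sample: half identical pairs, half non-identical."""
    X, y = [], []
    for _ in range(n):
        a = rng.choice(symbols)
        same = rng.random() < 0.5
        b = a if same else rng.choice(symbols[symbols != a])
        X.append(np.concatenate([one_hot(a), one_hot(b)]))
        y.append(int(same))
    return np.array(X), np.array(y)

train_syms = np.arange(8)          # alphabet seen during training
novel_syms = np.arange(8, 10)      # held-out symbols, unseen in training

X_tr, y_tr = make_pairs(train_syms, 2000)
X_te, y_te = make_pairs(novel_syms, 500)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy on seen symbols: ", clf.score(X_tr, y_tr))  # near 1.0
print("accuracy on novel symbols:", clf.score(X_te, y_te))  # typically near chance
```

Under a one-hot encoding, the input dimensions for the held-out symbols receive no gradient signal during training, so the network's behavior on them is essentially set by initialization; this is the kind of encoding dependence the abstract refers to, and the paper's experiments explore how different input encodings change whether the identity rule generalizes.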

Similar Articles

1
Invariance, Encodings, and Generalization: Learning Identity Effects With Neural Networks.
Neural Comput. 2022 Jul 14;34(8):1756-1789. doi: 10.1162/neco_a_01510.
2
Generalization limits of Graph Neural Networks in identity effects learning.
Neural Netw. 2025 Jan;181:106793. doi: 10.1016/j.neunet.2024.106793. Epub 2024 Oct 10.
3
Stability analysis of stochastic gradient descent for homogeneous neural networks and linear classifiers.
Neural Netw. 2023 Jul;164:382-394. doi: 10.1016/j.neunet.2023.04.028. Epub 2023 Apr 25.
4
Engineering a Less Artificial Intelligence.
Neuron. 2019 Sep 25;103(6):967-979. doi: 10.1016/j.neuron.2019.08.034.
5
Strengthening transferability of adversarial examples by adaptive inertia and amplitude spectrum dropout.
Neural Netw. 2023 Aug;165:925-937. doi: 10.1016/j.neunet.2023.06.031. Epub 2023 Jun 30.
6
Learning matrix factorization with scalable distance metric and regularizer.
Neural Netw. 2023 Apr;161:254-266. doi: 10.1016/j.neunet.2023.01.034. Epub 2023 Feb 3.
7
Why ResNet Works? Residuals Generalize.
IEEE Trans Neural Netw Learn Syst. 2020 Dec;31(12):5349-5362. doi: 10.1109/TNNLS.2020.2966319. Epub 2020 Nov 30.
8
Decentralized stochastic sharpness-aware minimization algorithm.
Neural Netw. 2024 Aug;176:106325. doi: 10.1016/j.neunet.2024.106325. Epub 2024 Apr 17.
9
Vulnerability of classifiers to evolutionary generated adversarial examples.
Neural Netw. 2020 Jul;127:168-181. doi: 10.1016/j.neunet.2020.04.015. Epub 2020 Apr 20.
10
VISAL-A novel learning strategy to address class imbalance.
Neural Netw. 2023 Apr;161:178-184. doi: 10.1016/j.neunet.2023.01.015. Epub 2023 Jan 20.