A general framework for interpretable neural learning based on local information-theoretic goal functions.

Authors

Makkeh Abdullah, Graetz Marcel, Schneider Andreas C, Ehrlich David A, Priesemann Viola, Wibral Michael

Affiliations

Department of Data-driven Analysis of Biological Networks, Göttingen Campus Institute for Dynamics of Biological Networks, University of Göttingen, Göttingen 37077, Germany.

Complex Systems Theory, Max Planck Institute for Dynamics and Self-Organization, Göttingen 37077, Germany.

Publication Information

Proc Natl Acad Sci U S A. 2025 Mar 11;122(10):e2408125122. doi: 10.1073/pnas.2408125122. Epub 2025 Mar 5.

DOI: 10.1073/pnas.2408125122
PMID: 40042906
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11912414/
Abstract

Despite the impressive performance of biological and artificial networks, an intuitive understanding of how their local learning dynamics contribute to network-level task solutions remains a challenge to this date. Efforts to bring learning to a more local scale indeed lead to valuable insights, however, a general constructive approach to describe local learning goals that is both interpretable and adaptable across diverse tasks is still missing. We have previously formulated a local information processing goal that is highly adaptable and interpretable for a model neuron with compartmental structure. Building on recent advances in Partial Information Decomposition (PID), we here derive a corresponding parametric local learning rule, which allows us to introduce "infomorphic" neural networks. We demonstrate the versatility of these networks to perform tasks from supervised, unsupervised, and memory learning. By leveraging the interpretable nature of the PID framework, infomorphic networks represent a valuable tool to advance our understanding of the intricate structure of local learning.
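To make the abstract's key objects concrete: in a two-source Partial Information Decomposition, the mutual information between a neuron's output Y and two classes of input, say a receptive drive R and a contextual signal C, splits into four non-negative atoms (unique to each source, redundant, and synergistic), and a parametric local goal function can be written as a weighted sum of those atoms. The sketch below shows only this general shape in standard PID notation; the weights \gamma_i and the pairing of R and C are illustrative assumptions, not the paper's exact parameterization or its specific PID measure.

% Two-source PID of the output's mutual information (standard notation):
I(Y : R, C) = \Pi_{\mathrm{unq}(R)} + \Pi_{\mathrm{unq}(C)} + \Pi_{\mathrm{red}} + \Pi_{\mathrm{syn}}

% A parametric local goal function as a weighted sum of atoms (weights illustrative),
% with a local learning rule ascending it:
G(\gamma) = \gamma_1 \Pi_{\mathrm{unq}(R)} + \gamma_2 \Pi_{\mathrm{unq}(C)} + \gamma_3 \Pi_{\mathrm{red}} + \gamma_4 \Pi_{\mathrm{syn}}, \qquad \Delta\theta \propto \partial G / \partial\theta

Different weightings select qualitatively different local goals, for example rewarding synergy between feedforward and contextual inputs versus information unique to the feedforward drive, which is the sense in which such a goal function is both adaptable and interpretable.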

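As a minimal runnable sketch of what computing a two-source PID involves, the Python snippet below implements the classic Williams-Beer I_min decomposition for discrete variables, applied to the XOR distribution, the textbook example of purely synergistic information. This is an assumption-laden illustration: the paper itself builds on a different, more recent PID measure (based on shared exclusions), and all names here are hypothetical.

from collections import defaultdict
from math import log2

# Joint distribution p(r, c, y) with Y = R XOR C and uniform binary inputs:
# the textbook example of purely synergistic information.
p = {(r, c, r ^ c): 0.25 for r in (0, 1) for c in (0, 1)}

def marginal(dist, idxs):
    """Marginalize the joint distribution onto the given variable indices."""
    m = defaultdict(float)
    for outcome, prob in dist.items():
        m[tuple(outcome[i] for i in idxs)] += prob
    return m

def mutual_info(dist, src_idxs, tgt_idx=2):
    """I(Y; X) between the target variable and the selected source variables."""
    px = marginal(dist, src_idxs)
    py = marginal(dist, [tgt_idx])
    pxy = marginal(dist, src_idxs + [tgt_idx])
    return sum(prob * log2(prob / (px[xy[:-1]] * py[xy[-1:]]))
               for xy, prob in pxy.items() if prob > 0)

def specific_info(dist, src_idxs, y, tgt_idx=2):
    """I(Y=y; X): information the source provides about one target outcome."""
    py = marginal(dist, [tgt_idx])
    px = marginal(dist, src_idxs)
    pxy = marginal(dist, src_idxs + [tgt_idx])
    total = 0.0
    for xy, prob in pxy.items():
        if xy[-1] == y and prob > 0:
            p_x_given_y = prob / py[(y,)]       # p(x | y)
            p_y_given_x = prob / px[xy[:-1]]    # p(y | x)
            total += p_x_given_y * log2(p_y_given_x / py[(y,)])
    return total

# Williams-Beer redundancy: expected minimum specific information over sources.
py = marginal(p, [2])
red = sum(py[(y,)] * min(specific_info(p, [0], y), specific_info(p, [1], y))
          for (y,) in py)
unq_r = mutual_info(p, [0]) - red                    # unique to R
unq_c = mutual_info(p, [1]) - red                    # unique to C
syn = mutual_info(p, [0, 1]) - red - unq_r - unq_c   # synergy

print(f"red={red:.3f}  unq_r={unq_r:.3f}  unq_c={unq_c:.3f}  syn={syn:.3f}")
# Expected for XOR: red=0, unq_r=0, unq_c=0, syn=1 bit.

Running this prints a pure 1-bit synergy atom, matching the intuition that neither input alone tells you anything about an XOR output; an infomorphic goal function that weights the synergy atom positively would reward exactly this kind of joint coding.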

Figures (PMC11912414):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/10ee/11912414/0d8aa3e5f36d/pnas.2408125122fig01.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/10ee/11912414/a67c9cbbe482/pnas.2408125122fig02.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/10ee/11912414/baf110b9d2b0/pnas.2408125122fig03.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/10ee/11912414/54a982b7c86b/pnas.2408125122fig04.jpg

Similar Articles

1. A general framework for interpretable neural learning based on local information-theoretic goal functions.
Proc Natl Acad Sci U S A. 2025 Mar 11;122(10):e2408125122. doi: 10.1073/pnas.2408125122. Epub 2025 Mar 5.
2. Partial information decomposition as a unified approach to the specification of neural goal functions.
Brain Cogn. 2017 Mar;112:25-38. doi: 10.1016/j.bandc.2015.09.004. Epub 2015 Oct 21.
3. Interpretable deep learning for deconvolutional analysis of neural signals.
Neuron. 2025 Apr 16;113(8):1151-1168.e13. doi: 10.1016/j.neuron.2025.02.006. Epub 2025 Mar 12.
4. Learning sequence attractors in recurrent networks with hidden neurons.
Neural Netw. 2024 Oct;178:106466. doi: 10.1016/j.neunet.2024.106466. Epub 2024 Jun 22.
5. Probability density function learning by unsupervised neurons.
Int J Neural Syst. 2001 Oct;11(5):399-417. doi: 10.1142/S0129065701000898.
6. Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
PLoS One. 2016 Aug 17;11(8):e0161335. doi: 10.1371/journal.pone.0161335. eCollection 2016.
7. Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks.
PLoS Comput Biol. 2024 Jun 3;20(6):e1012178. doi: 10.1371/journal.pcbi.1012178. eCollection 2024 Jun.
8. Supervised learning in spiking neural networks: A review of algorithms and evaluations.
Neural Netw. 2020 May;125:258-280. doi: 10.1016/j.neunet.2020.02.011. Epub 2020 Feb 25.
9. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections.
Neural Netw. 2023 Aug;165:799-808. doi: 10.1016/j.neunet.2023.06.019. Epub 2023 Jun 22.
10. MARBLE: interpretable representations of neural population dynamics using geometric deep learning.
Nat Methods. 2025 Mar;22(3):612-620. doi: 10.1038/s41592-024-02582-2. Epub 2025 Feb 17.
