

Classic Hebbian learning endows feed-forward networks with sufficient adaptability in challenging reinforcement learning tasks.

Affiliation

Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa, Japan.

Publication Information

J Neurophysiol. 2021 Jun 1;125(6):2034-2037. doi: 10.1152/jn.00712.2020. Epub 2021 Apr 28.

DOI: 10.1152/jn.00712.2020
PMID: 33909499
Abstract

A common pitfall of current reinforcement learning agents implemented in computational models is their inadaptability postoptimization. Najarro and Risi [Najarro E, Risi S. Adv Neural Inf Process Syst 33: 20719-20731, 2020] demonstrate how such adaptability may be salvaged in artificial feed-forward networks by optimizing coefficients of classic Hebbian rules to dynamically control the networks' weights instead of optimizing the weights directly. Although such models fail to capture many important neurophysiological details, allying the fields of neuroscience and artificial intelligence in this way bears many fruits for both fields, especially when computational models engage with topics with a rich history in neuroscience such as Hebbian plasticity.
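The mechanism the abstract describes, optimizing the coefficients of a Hebbian rule rather than the weights themselves, is commonly written as the "ABCD" rule Δw = η(A·x·y + B·x + C·y + D), where x and y are pre- and postsynaptic activity. The sketch below is a minimal illustration of that idea in NumPy; the function name, shapes, and learning rate are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

def hebbian_step(w, x, y, coef, eta=0.1):
    """One ABCD Hebbian update: dw = eta * (A*x*y + B*x + C*y + D).

    w    : (n_in, n_out) weight matrix
    x    : (n_in,) presynaptic activations
    y    : (n_out,) postsynaptic activations
    coef : stacked per-synapse coefficients A, B, C, D, shape (4, n_in, n_out)
    """
    A, B, C, D = coef
    # The outer product pairs every pre/post activation; B and C terms
    # broadcast pre-only and post-only activity onto each synapse.
    dw = eta * (A * np.outer(x, y) + B * x[:, None] + C * y[None, :] + D)
    return w + dw

rng = np.random.default_rng(0)
n_in, n_out = 4, 2
w = rng.normal(size=(n_in, n_out))        # weights start random each "lifetime"
coef = rng.normal(size=(4, n_in, n_out))  # the coefficients are what gets optimized

x = rng.normal(size=n_in)
y = np.tanh(x @ w)                        # forward pass through one layer
w = hebbian_step(w, x, y, coef)           # weights self-organize during behavior
```

In this scheme an outer optimization loop (an evolution strategy, in Najarro and Risi's case) searches over `coef`, while the weights themselves change continually at "run time" under the rule, which is what restores postoptimization adaptability.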


Similar Articles

1. Classic Hebbian learning endows feed-forward networks with sufficient adaptability in challenging reinforcement learning tasks.
J Neurophysiol. 2021 Jun 1;125(6):2034-2037. doi: 10.1152/jn.00712.2020. Epub 2021 Apr 28.
2. Exploration in neo-Hebbian reinforcement learning: Computational approaches to the exploration-exploitation balance with bio-inspired neural networks.
Neural Netw. 2022 Jul;151:16-33. doi: 10.1016/j.neunet.2022.03.021. Epub 2022 Mar 23.
3. Neural circuits for learning context-dependent associations of stimuli.
Neural Netw. 2018 Nov;107:48-60. doi: 10.1016/j.neunet.2018.07.018. Epub 2018 Aug 13.
4. A differential Hebbian framework for biologically-plausible motor control.
Neural Netw. 2022 Jun;150:237-258. doi: 10.1016/j.neunet.2022.03.002. Epub 2022 Mar 10.
5. Deep reinforcement learning to study spatial navigation, learning and memory in artificial and biological agents.
Biol Cybern. 2021 Apr;115(2):131-134. doi: 10.1007/s00422-021-00862-0. Epub 2021 Feb 9.
6. Projective simulation for artificial intelligence.
Sci Rep. 2012;2:400. doi: 10.1038/srep00400. Epub 2012 May 15.
7. Learning offline: memory replay in biological and artificial reinforcement learning.
Trends Neurosci. 2021 Oct;44(10):808-821. doi: 10.1016/j.tins.2021.07.007. Epub 2021 Sep 1.
8. Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics.
Front Neurorobot. 2015 Aug 17;9:9. doi: 10.3389/fnbot.2015.00009. eCollection 2015.
9. Emergence and reconfiguration of modular structure for artificial neural networks during continual familiarity detection.
Sci Adv. 2024 Jul 26;10(30):eadm8430. doi: 10.1126/sciadv.adm8430.
10. Deep Reinforcement Learning With Modulated Hebbian Plus Q-Network Architecture.
IEEE Trans Neural Netw Learn Syst. 2022 May;33(5):2045-2056. doi: 10.1109/TNNLS.2021.3110281. Epub 2022 May 2.