
Intrinsic Rewards for Exploration Without Harm From Observational Noise: A Simulation Study Based on the Free Energy Principle.

Affiliations

Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son 904-0495, Okinawa, Japan

Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son 904-0495, Okinawa, Japan

Publication Information

Neural Comput. 2024 Aug 19;36(9):1854-1885. doi: 10.1162/neco_a_01690.

PMID: 39106455
Abstract

In reinforcement learning (RL), artificial agents are trained to maximize numerical rewards by performing tasks. Exploration is essential in RL because agents must discover information before exploiting it. Two rewards encouraging efficient exploration are the entropy of the action policy and curiosity for information gain. Entropy is well established in the literature, promoting randomized action selection. Curiosity is defined in a broad variety of ways in the literature, promoting the discovery of novel experiences. One example, prediction error curiosity, rewards agents for discovering observations they cannot accurately predict. However, such agents may be distracted by unpredictable observational noise, known as curiosity traps. Based on the free energy principle (FEP), this letter proposes hidden state curiosity, which rewards agents with the KL divergence between the predictive prior and posterior probabilities of latent variables. We trained six types of agents to navigate mazes: baseline agents without rewards for entropy or curiosity, and agents rewarded for entropy and/or either prediction error curiosity or hidden state curiosity. We find that entropy and curiosity result in efficient exploration, especially when both are employed together. Notably, agents with hidden state curiosity demonstrate resilience against curiosity traps, which hinder agents with prediction error curiosity. This suggests that implementing the FEP may enhance the robustness and generalization of RL models, potentially aligning the learning processes of artificial and biological agents.
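The hidden state curiosity reward described above is a KL divergence between a predictive prior and a posterior over latent variables. As a minimal sketch of the idea (not the paper's implementation), assuming the agent's world model represents both distributions as diagonal Gaussians over the latent state, the per-step intrinsic reward can be computed in closed form; the dimensionality and example parameter values below are hypothetical:

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for diagonal
    Gaussians, summed over latent dimensions (closed form)."""
    return np.sum(
        np.log(sigma_p / sigma_q)
        + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
        - 0.5
    )

# Hypothetical 4-dimensional latent state: the predictive prior is what the
# model expected before seeing the observation; the posterior is its belief
# after incorporating it. The gap between them is the intrinsic reward.
mu_prior, sigma_prior = np.zeros(4), np.ones(4)
mu_post, sigma_post = np.full(4, 0.5), np.full(4, 0.8)

intrinsic_reward = gaussian_kl(mu_post, sigma_post, mu_prior, sigma_prior)
```

Under this formulation, pure observational noise that the latent model already accounts for shifts the posterior little, so the reward stays small; genuinely informative observations move the posterior away from the prior and yield a larger reward, which is the intuition behind its resilience to curiosity traps.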

Similar Articles

1
Intrinsic Rewards for Exploration Without Harm From Observational Noise: A Simulation Study Based on the Free Energy Principle.
Neural Comput. 2024 Aug 19;36(9):1854-1885. doi: 10.1162/neco_a_01690.
2
Contributions of expected learning progress and perceptual novelty to curiosity-driven exploration.
Cognition. 2022 Aug;225:105119. doi: 10.1016/j.cognition.2022.105119. Epub 2022 Apr 12.
3
Curiosity-driven recommendation strategy for adaptive learning via deep reinforcement learning.
Br J Math Stat Psychol. 2020 Nov;73(3):522-540. doi: 10.1111/bmsp.12199. Epub 2020 Feb 21.
4
Nutrient-Sensitive Reinforcement Learning in Monkeys.
J Neurosci. 2023 Mar 8;43(10):1714-1730. doi: 10.1523/JNEUROSCI.0752-22.2022. Epub 2023 Jan 20.
5
LJIR: Learning Joint-Action Intrinsic Reward in cooperative multi-agent reinforcement learning.
Neural Netw. 2023 Oct;167:450-459. doi: 10.1016/j.neunet.2023.08.016. Epub 2023 Aug 22.
6
Forward and inverse reinforcement learning sharing network weights and hyperparameters.
Neural Netw. 2021 Dec;144:138-153. doi: 10.1016/j.neunet.2021.08.017. Epub 2021 Aug 20.
7
Intrinsic motivation, curiosity, and learning: Theory and applications in educational technologies.
Prog Brain Res. 2016;229:257-284. doi: 10.1016/bs.pbr.2016.05.005. Epub 2016 Jul 29.
8
Intrinsically motivated oculomotor exploration guided by uncertainty reduction and conditioned reinforcement in non-human primates.
Sci Rep. 2016 Feb 3;6:20202. doi: 10.1038/srep20202.
9
Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: a simulated robotic study.
Neural Netw. 2013 Mar;39:40-51. doi: 10.1016/j.neunet.2012.12.012. Epub 2013 Jan 14.
10
Decoding reward-curiosity conflict in decision-making from irrational behaviors.
Nat Comput Sci. 2023 May;3(5):418-432. doi: 10.1038/s43588-023-00439-w. Epub 2023 May 15.

Cited By

1
Free Energy Projective Simulation (FEPS): Active inference with interpretability.
PLoS One. 2025 Sep 4;20(9):e0331047. doi: 10.1371/journal.pone.0331047. eCollection 2025.