

Kernel dynamic policy programming: Applicable reinforcement learning to robot systems with high dimensional states.

Affiliations

Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama, Ikoma, Nara, Japan.


Publication Information

Neural Netw. 2017 Oct;94:13-23. doi: 10.1016/j.neunet.2017.06.007. Epub 2017 Jun 29.

DOI: 10.1016/j.neunet.2017.06.007
PMID: 28732231
Abstract

We propose a new value function approach for model-free reinforcement learning in Markov decision processes involving high dimensional states that addresses the issues of brittleness and intractable computational complexity, thereby rendering value function based reinforcement learning algorithms applicable to high dimensional systems. Our new algorithm, Kernel Dynamic Policy Programming (KDPP), smoothly updates the value function in accordance with the Kullback-Leibler divergence between the current and updated policies. Stabilizing the learning in this manner enables the application of the kernel trick to value function approximation, which greatly reduces the computational requirements for learning in high dimensional state spaces. The performance of KDPP against other kernel-trick-based value function approaches is first investigated in a simulated n-DOF manipulator reaching task, where only KDPP efficiently learned a viable policy at n=40. As an application to a real-world high dimensional robot system, KDPP successfully learned the task of unscrewing a bottle cap via a Pneumatic Artificial Muscle (PAM) driven robotic hand with tactile sensors; a system with a state space of 32 dimensions, given limited samples and ordinary computing resources.
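The "smooth update" described above can be illustrated with a minimal tabular sketch of Dynamic Policy Programming, the KL-regularized update that KDPP kernelizes for high dimensional states. The toy two-state MDP, variable names, and hyperparameters below are illustrative assumptions for exposition, not the paper's implementation (which replaces the table with a kernel approximation of the action preferences).

```python
import math

# Toy deterministic 2-state MDP (illustrative, not from the paper):
# state 0: action 1 moves to state 1; state 1: action 0 stays and earns reward 1.
P = [[0, 1], [1, 0]]          # next_state[s][a]
R = [[0.0, 0.0], [1.0, 0.0]]  # reward[s][a]
gamma, eta = 0.9, 1.0         # discount factor, inverse-temperature of the KL term

def boltzmann_avg(prefs, eta):
    """KL-regularized soft value: Boltzmann-weighted mean of action preferences."""
    m = max(prefs)  # subtract max for numerical stability
    w = [math.exp(eta * (p - m)) for p in prefs]
    z = sum(w)
    return sum(wi * p for wi, p in zip(w, prefs)) / z

# DPP iteration: preferences are nudged by the soft Bellman residual, so the
# policy (softmax of psi) changes smoothly between iterations.
psi = [[0.0, 0.0], [0.0, 0.0]]  # action preferences psi(s, a)
for _ in range(200):
    mval = [boltzmann_avg(psi[s], eta) for s in range(2)]
    psi = [[psi[s][a] - mval[s] + R[s][a] + gamma * mval[P[s][a]]
            for a in range(2)] for s in range(2)]

greedy = [max(range(2), key=lambda a: psi[s][a]) for s in range(2)]
print(greedy)  # prints [1, 0]: move to state 1, then stay
```

The preference of the optimal action at state 1 converges to the optimal value 1/(1-gamma) = 10, while suboptimal preferences diverge downward, so the softmax policy approaches the greedy optimum. KDPP's contribution is that this stabilized update remains well-behaved when psi is represented with kernel features instead of a table.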

Similar Articles

1. Kernel dynamic policy programming: Applicable reinforcement learning to robot systems with high dimensional states. Neural Netw. 2017 Oct;94:13-23. doi: 10.1016/j.neunet.2017.06.007. Epub 2017 Jun 29.
2. A Reinforcement Learning Neural Network for Robotic Manipulator Control. Neural Comput. 2018 Jul;30(7):1983-2004. doi: 10.1162/neco_a_01079. Epub 2018 Apr 13.
3. Kernel temporal differences for neural decoding. Comput Intell Neurosci. 2015;2015:481375. doi: 10.1155/2015/481375. Epub 2015 Mar 17.
4. Kernel-based least squares policy iteration for reinforcement learning. IEEE Trans Neural Netw. 2007 Jul;18(4):973-92. doi: 10.1109/TNN.2007.899161.
5. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning. Neural Netw. 2016 Dec;84:17-27. doi: 10.1016/j.neunet.2016.07.013. Epub 2016 Aug 26.
6. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world. Neural Netw. 2018 Nov;107:23-33. doi: 10.1016/j.neunet.2018.03.014. Epub 2018 Mar 26.
7. Functional Contour-following via Haptic Perception and Reinforcement Learning. IEEE Trans Haptics. 2018 Jan-Mar;11(1):61-72. doi: 10.1109/TOH.2017.2753233. Epub 2017 Sep 18.
8. Reinforcement learning of motor skills with policy gradients. Neural Netw. 2008 May;21(4):682-97. doi: 10.1016/j.neunet.2008.02.003. Epub 2008 Apr 26.
9. Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes. IEEE Trans Neural Netw Learn Syst. 2015 Mar;26(3):522-36. doi: 10.1109/TNNLS.2014.2316291.
10. Parameter-exploring policy gradients. Neural Netw. 2010 May;23(4):551-9. doi: 10.1016/j.neunet.2009.12.004. Epub 2009 Dec 16.