Relative importance sampling for off-policy actor-critic in deep reinforcement learning.

Authors

Humayoo Mahammad, Zheng Gengzhong, Dong Xiaoqing, Miao Liming, Qiu Shuwei, Zhou Zexun, Wang Peitao, Ullah Zakir, Junejo Naveed Ur Rehman, Cheng Xueqi

Affiliations

Hanshan Normal University, Chaozhou, 521041, China.

CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, CAS, Beijing, 100190, China.

Publication Information

Sci Rep. 2025 Apr 24;15(1):14349. doi: 10.1038/s41598-025-96201-5.

DOI: 10.1038/s41598-025-96201-5
PMID: 40274865
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12022357/
Abstract

Off-policy learning is more unstable than on-policy learning in reinforcement learning (RL). A major cause of this instability is the mismatch between the probability distributions of the target policy (π) and the behavior policy (b), and this distributional mismatch is also a source of high variance. Importance sampling (IS) can correct the discrepancy between the two distributions, but IS itself suffers from high variance, which is exacerbated in sequential settings. We propose a smoothed form of importance sampling, relative importance sampling (RIS), which mitigates variance and stabilizes learning; variance is controlled by tuning the smoothness parameter of RIS. Using this strategy, we develop the first model-free relative importance sampling off-policy actor-critic (RIS-off-PAC) algorithms in RL. Our method uses one network to generate the target policy (the actor) and a value function to evaluate the current policy π (the critic), based on behavior-policy samples. Our algorithms are trained with behavior-policy action values in the reward function rather than target-policy ones, and both the actor and the critic are deep neural networks. Our methods perform as well as or better than several state-of-the-art RL baselines on OpenAI Gym tasks and synthetic datasets.
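
To make the smoothing concrete, here is a minimal Python sketch of a relative importance weight. It assumes RIS takes the α-relative density-ratio form of Yamada et al. (reference 3 under "References"), which matches the abstract's description of a smoothed IS ratio with a single smoothness parameter; the paper's exact symbol and parameterization may differ, and the function name and probabilities below are illustrative only, not the authors' code.

def ris_weight(pi_prob, b_prob, alpha):
    # alpha-relative importance weight of target policy pi over behavior
    # policy b (assumed form, after Yamada et al.'s relative density ratio):
    #     w_alpha(a|s) = pi(a|s) / (alpha * pi(a|s) + (1 - alpha) * b(a|s))
    # alpha = 0 recovers ordinary importance sampling pi/b (unbounded);
    # alpha = 1 gives a constant weight of 1 (as if on-policy);
    # for 0 < alpha <= 1 the weight is capped at 1/alpha, taming variance.
    return pi_prob / (alpha * pi_prob + (1.0 - alpha) * b_prob)

# Worst case for plain IS: the target policy strongly favors an action
# the behavior policy almost never takes.
pi_a, b_a = 0.9, 0.05
print(ris_weight(pi_a, b_a, alpha=0.0))  # 18.0  -> ordinary IS ratio
print(ris_weight(pi_a, b_a, alpha=0.5))  # ~1.89 -> capped at 1/0.5 = 2
print(ris_weight(pi_a, b_a, alpha=1.0))  # 1.0   -> fully smoothed

In an off-policy actor-critic update, such a weight would presumably multiply the error signal computed from behavior-policy samples, correcting updates toward the target policy while the 1/α cap keeps their variance bounded.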


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/60a5f03f5987/41598_2025_96201_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/2f8026391031/41598_2025_96201_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/e67f68ad81c8/41598_2025_96201_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/a980faa77903/41598_2025_96201_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/a558688faa3d/41598_2025_96201_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/86da20071bdc/41598_2025_96201_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/ee706dc71cc3/41598_2025_96201_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/a252e988148d/41598_2025_96201_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/a5d4494c1c6b/41598_2025_96201_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/b7ade1534680/41598_2025_96201_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/4d3b04d2fa2a/41598_2025_96201_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/210118013ec3/41598_2025_96201_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/05e30cd6dca2/41598_2025_96201_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/77c1047b196e/41598_2025_96201_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fa5a/12022357/dc8116054439/41598_2025_96201_Fig13_HTML.jpg

Similar Articles

1
Relative importance sampling for off-policy actor-critic in deep reinforcement learning.
Sci Rep. 2025 Apr 24;15(1):14349. doi: 10.1038/s41598-025-96201-5.
2
Stochastic Integrated Actor-Critic for Deep Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):6654-6666. doi: 10.1109/TNNLS.2022.3212273. Epub 2024 May 2.
3
Distributional Soft Actor-Critic: Off-Policy Reinforcement Learning for Addressing Value Estimation Errors.
IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6584-6598. doi: 10.1109/TNNLS.2021.3082568. Epub 2022 Oct 27.
4
Meta attention for Off-Policy Actor-Critic.
Neural Netw. 2023 Jun;163:86-96. doi: 10.1016/j.neunet.2023.03.024. Epub 2023 Mar 28.
5
Episodic Memory-Double Actor-Critic Twin Delayed Deep Deterministic Policy Gradient.
Neural Netw. 2025 Jul;187:107286. doi: 10.1016/j.neunet.2025.107286. Epub 2025 Feb 27.
6
An actor-critic framework based on deep reinforcement learning for addressing flexible job shop scheduling problems.
Math Biosci Eng. 2024 Jan;21(1):1445-1471. doi: 10.3934/mbe.2024062. Epub 2022 Dec 28.
7
Boosting On-Policy Actor-Critic With Shallow Updates in Critic.
IEEE Trans Neural Netw Learn Syst. 2025 Mar;36(3):5644-5653. doi: 10.1109/TNNLS.2024.3378913. Epub 2025 Feb 28.
8
Broad Critic Deep Actor Reinforcement Learning for Continuous Control.
IEEE Trans Neural Netw Learn Syst. 2025 Apr 8;PP. doi: 10.1109/TNNLS.2025.3554082.
9
Distributional Soft Actor-Critic With Three Refinements.
IEEE Trans Pattern Anal Mach Intell. 2025 May;47(5):3935-3946. doi: 10.1109/TPAMI.2025.3537087. Epub 2025 Apr 8.
10
Mild Policy Evaluation for Offline Actor-Critic.
IEEE Trans Neural Netw Learn Syst. 2024 Dec;35(12):17950-17964. doi: 10.1109/TNNLS.2023.3309906. Epub 2024 Dec 2.

Cited By

1
Reinforcement Learning-Based Nonlinear Model Predictive Controller for a Jacketed Reactor: A Machine Learning Concept Validation Using Jetson Orin.
ACS Omega. 2025 Jul 9;10(28):30864-30878. doi: 10.1021/acsomega.5c03219. eCollection 2025 Jul 22.

References

1
Mastering the game of Go without human knowledge.
Nature. 2017 Oct 18;550(7676):354-359. doi: 10.1038/nature24270.
2
Mastering the game of Go with deep neural networks and tree search.
Nature. 2016 Jan 28;529(7587):484-9. doi: 10.1038/nature16961.
3
Relative density-ratio estimation for robust distribution comparison.
Neural Comput. 2013 May;25(5):1324-70. doi: 10.1162/NECO_a_00442.
4
Adaptive importance sampling for value function approximation in off-policy reinforcement learning.
Neural Netw. 2009 Dec;22(10):1399-410. doi: 10.1016/j.neunet.2009.01.002. Epub 2009 Jan 23.