


Action-driven contrastive representation for reinforcement learning.

Affiliations

Graduate School of Artificial Intelligence, Seoul National University, Seoul, Republic of Korea.

Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea.

Publication Information

PLoS One. 2022 Mar 18;17(3):e0265456. doi: 10.1371/journal.pone.0265456. eCollection 2022.

DOI: 10.1371/journal.pone.0265456
PMID: 35303031
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8932622/
Abstract

In reinforcement learning, reward-driven feature learning directly from high-dimensional images faces two challenges: sample efficiency for solving control tasks and generalization to unseen observations. Prior work has addressed these issues by learning representations from pixel inputs. However, those representations were either vulnerable to the high diversity inherent in environments or failed to capture the characteristics needed for solving control tasks. To mitigate these problems, we propose a novel contrastive representation method, the Action-Driven Auxiliary Task (ADAT), which forces the representation to concentrate on the features essential for deciding actions and to ignore control-irrelevant details. Using ADAT's augmented state-action dictionary, the agent learns a representation that maximizes agreement between observations sharing the same actions. The proposed method significantly outperforms model-free and model-based algorithms on Atari and OpenAI ProcGen, widely used benchmarks for sample efficiency and generalization.
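To make the mechanism concrete, below is a minimal PyTorch-style sketch of an action-matched contrastive (InfoNCE-style) objective, assuming a discrete action space. The encoder architecture, the temperature value, and all function and variable names here are illustrative assumptions, not the authors' implementation; in the paper the positive pairs come from ADAT's augmented state-action dictionary, whereas this sketch simply pairs same-action observations within a batch.

# Hypothetical sketch (not the authors' code): an InfoNCE-style loss in which
# observations sharing the same discrete action are positives and all other
# observations in the batch are negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy CNN mapping 84x84 single-channel frames to unit-norm embeddings."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),  # infers the flattened size on first forward
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)

def action_matched_nce(z: torch.Tensor, actions: torch.Tensor,
                       temperature: float = 0.1) -> torch.Tensor:
    """Pull together embeddings of observations that share the same action."""
    n = z.size(0)
    sim = z @ z.t() / temperature                      # pairwise similarities
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-pairs
    pos = (actions.unsqueeze(0) == actions.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    has_pos = pos.any(dim=1)                           # skip anchors with no positive
    per_anchor = log_prob[has_pos].sum(dim=1) / pos[has_pos].sum(dim=1)
    return -per_anchor.mean()

if __name__ == "__main__":
    enc = Encoder()
    obs = torch.randn(16, 1, 84, 84)                   # a batch of frames
    acts = torch.randint(0, 4, (16,))                  # discrete actions
    loss = action_matched_nce(enc(obs), acts)
    loss.backward()
    print(loss.item())

The L2-normalized embeddings and temperature scaling follow common contrastive-learning practice (as in SimCLR or CURL); anchors with no same-action partner in the batch are simply skipped rather than contributing an undefined loss term.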


Figures (pone.0265456.g001-g006):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/45bbfe5bc9a8/pone.0265456.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/19ca385c3422/pone.0265456.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/1c4597a80787/pone.0265456.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/c00e31d92b82/pone.0265456.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/d33d32661e02/pone.0265456.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7760/8932622/ee67a0a42d4f/pone.0265456.g006.jpg

Similar Articles

1. Action-driven contrastive representation for reinforcement learning.
   PLoS One. 2022 Mar 18;17(3):e0265456. doi: 10.1371/journal.pone.0265456. eCollection 2022.
2. Visual Pretraining via Contrastive Predictive Model for Pixel-Based Reinforcement Learning.
   Sensors (Basel). 2022 Aug 29;22(17):6504. doi: 10.3390/s22176504.
3. STACoRe: Spatio-temporal and action-based contrastive representations for reinforcement learning in Atari.
   Neural Netw. 2023 Mar;160:1-11. doi: 10.1016/j.neunet.2022.12.018. Epub 2022 Dec 29.
4. Masked Contrastive Representation Learning for Reinforcement Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):3421-3433. doi: 10.1109/TPAMI.2022.3176413. Epub 2023 Feb 3.
5. Sequential action-induced invariant representation for reinforcement learning.
   Neural Netw. 2024 Nov;179:106579. doi: 10.1016/j.neunet.2024.106579. Epub 2024 Jul 26.
6. Multimodal information bottleneck for deep reinforcement learning with multiple sensors.
   Neural Netw. 2024 Aug;176:106347. doi: 10.1016/j.neunet.2024.106347. Epub 2024 Apr 27.
7. Multiple Self-Supervised Auxiliary Tasks for Target-Driven Visual Navigation Using Deep Reinforcement Learning.
   Entropy (Basel). 2023 Jun 30;25(7):1007. doi: 10.3390/e25071007.
8. Generative subgoal oriented multi-agent reinforcement learning through potential field.
   Neural Netw. 2024 Nov;179:106552. doi: 10.1016/j.neunet.2024.106552. Epub 2024 Jul 17.
9. Discovering diverse solutions in deep reinforcement learning by maximizing state-action-based mutual information.
   Neural Netw. 2022 Aug;152:90-104. doi: 10.1016/j.neunet.2022.04.009. Epub 2022 Apr 16.
10. LJIR: Learning Joint-Action Intrinsic Reward in cooperative multi-agent reinforcement learning.
   Neural Netw. 2023 Oct;167:450-459. doi: 10.1016/j.neunet.2023.08.016. Epub 2023 Aug 22.