

Transfer of Temporal Logic Formulas in Reinforcement Learning.

Authors

Xu Zhe, Topcu Ufuk

Affiliation

University of Texas at Austin.

Publication

IJCAI (U S). 2019;28:4010-4018. doi: 10.24963/ijcai.2019/557.

DOI: 10.24963/ijcai.2019/557
PMID: 31631953
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6800702/
Abstract

Transferring high-level knowledge from a source task to a target task is an effective way to expedite reinforcement learning (RL). For example, propositional logic and first-order logic have been used as representations of such knowledge. We study the transfer of knowledge between tasks in which the timing of the events matters. We call such tasks temporal tasks. We concretize similarity between temporal tasks through a notion of logical transferability, and develop a transfer learning approach between different yet similar temporal tasks. We first propose an inference technique to extract metric interval temporal logic (MITL) formulas from labeled trajectories collected in RL of the two tasks. If logical transferability is identified through this inference, we construct a timed automaton for each of the inferred MITL formulas from both tasks. We perform RL on the extended state space, which includes the locations and clock valuations of the timed automata, for the source task. We then establish mappings between the corresponding components (clocks, locations, etc.) of the timed automata from the two tasks, and transfer the extended Q-functions based on the established mappings. Finally, we perform RL on the extended state space for the target task, starting with the transferred extended Q-functions. Our implementation results show, depending on how similar the source task and the target task are, that the sampling efficiency for the target task can be improved by up to one order of magnitude by performing RL in the extended state space, and further improved by up to another order of magnitude using the transferred extended Q-functions.
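To make the pipeline in the abstract concrete, here is a minimal Python sketch of its two core steps: Q-learning over the extended state space (environment state paired with a timed automaton's location and clock valuation), and warm-starting the target task from source-task Q-values re-indexed through mappings between the two automata's components. The `env` interface, the single-clock automaton, and helper names such as `labeler`, `loc_map`, and `clock_map` are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Hedged sketch only: the env interface (reset/step/actions), the
# single-clock automaton, and the mapping helpers below are assumptions
# made for illustration; the paper's construction handles automata
# inferred from MITL formulas, possibly with multiple clocks.

class TimedAutomaton:
    """Minimal single-clock timed automaton tracked alongside the MDP state."""

    def __init__(self, transitions, initial_location):
        # transitions: {(location, label): (next_location, (lo, hi), reset)}
        # A transition fires only if the clock valuation lies in [lo, hi];
        # `reset` indicates whether the clock is reset to zero on firing.
        self.transitions = transitions
        self.initial_location = initial_location

    def step(self, location, clock, label):
        entry = self.transitions.get((location, label))
        if entry is not None:
            location2, (lo, hi), reset = entry
            if lo <= clock <= hi:
                return location2, 0.0 if reset else clock + 1.0
        return location, clock + 1.0  # no transition enabled; time advances


def q_learning_extended(env, automaton, labeler, episodes=500,
                        alpha=0.1, gamma=0.99, eps=0.1, q_init=None):
    """Tabular Q-learning over extended states (env state, location, clock).

    `q_init` allows warm-starting from Q-values transferred from a source task.
    """
    Q = q_init if q_init is not None else defaultdict(float)
    for _ in range(episodes):
        s, loc, clk = env.reset(), automaton.initial_location, 0.0
        done = False
        while not done:
            ext = (s, loc, clk)
            acts = env.actions(s)
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda b: Q[(ext, b)]))
            s2, r, done = env.step(a)
            # Advance the automaton on the label of the new environment state.
            loc2, clk2 = automaton.step(loc, clk, labeler(s2))
            ext2 = (s2, loc2, clk2)
            best = 0.0 if done else max(Q[(ext2, b)] for b in env.actions(s2))
            Q[(ext, a)] += alpha * (r + gamma * best - Q[(ext, a)])
            s, loc, clk = s2, loc2, clk2
    return Q


def transfer_q(Q_source, loc_map, clock_map):
    """Re-index source-task Q-values via mappings between automaton components."""
    Q_target = defaultdict(float)
    for ((s, loc, clk), a), v in Q_source.items():
        if loc in loc_map:
            Q_target[((s, loc_map[loc], clock_map(clk)), a)] = v
    return Q_target
```

A target-task run would then call `q_learning_extended(target_env, target_automaton, target_labeler, q_init=transfer_q(Q_source, loc_map, clock_map))`, mirroring the final step described in the abstract.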


Similar Literature

1. Transfer of Temporal Logic Formulas in Reinforcement Learning.
   IJCAI (U S). 2019;28:4010-4018. doi: 10.24963/ijcai.2019/557.
2. Context transfer in reinforcement learning using action-value functions.
   Comput Intell Neurosci. 2014;2014:428567. doi: 10.1155/2014/428567. Epub 2014 Dec 31.
3. Compositional RL Agents That Follow Language Commands in Temporal Logic.
   Front Robot AI. 2021 Jul 19;8:689550. doi: 10.3389/frobt.2021.689550. eCollection 2021.
4. Bounded Model Checking for Metric Temporal Logic Properties of Timed Automata with Digital Clocks.
   Sensors (Basel). 2022 Dec 6;22(23):9552. doi: 10.3390/s22239552.
5. Context-Based Meta-Reinforcement Learning With Bayesian Nonparametric Models.
   IEEE Trans Pattern Anal Mach Intell. 2024 Oct;46(10):6948-6965. doi: 10.1109/TPAMI.2024.3386780. Epub 2024 Sep 5.
6. Exploration With Task Information for Meta Reinforcement Learning.
   IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4033-4046. doi: 10.1109/TNNLS.2021.3121432. Epub 2023 Aug 4.
7. State-Temporal Compression in Reinforcement Learning With the Reward-Restricted Geodesic Metric.
   IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5572-5589. doi: 10.1109/TPAMI.2021.3069005. Epub 2022 Aug 4.
8. Hierarchical clustering optimizes the tradeoff between compositionality and expressivity of task structures for flexible reinforcement learning.
   Artif Intell. 2022 Nov;312. doi: 10.1016/j.artint.2022.103770. Epub 2022 Aug 5.
9. Curriculum-Based Asymmetric Multi-Task Reinforcement Learning.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7258-7269. doi: 10.1109/TPAMI.2022.3223872. Epub 2023 May 5.
10. Safe reinforcement learning under temporal logic with reward design and quantum action selection.
    Sci Rep. 2023 Feb 2;13(1):1925. doi: 10.1038/s41598-023-28582-4.

Cited By

1. Robotics Dexterous Grasping: The Methods Based on Point Cloud and Deep Learning.
   Front Neurorobot. 2021 Jun 9;15:658280. doi: 10.3389/fnbot.2021.658280. eCollection 2021.
2. Control strategies for COVID-19 epidemic with vaccination, shield immunity and quarantine: A metric temporal logic approach.
   PLoS One. 2021 Mar 5;16(3):e0247660. doi: 10.1371/journal.pone.0247660. eCollection 2021.