Transfer of Temporal Logic Formulas in Reinforcement Learning.

Author Information

Xu Zhe, Topcu Ufuk

Affiliation

University of Texas at Austin.

Publication Information

IJCAI (U S). 2019;28:4010-4018. doi: 10.24963/ijcai.2019/557.

Abstract

Transferring high-level knowledge from a source task to a target task is an effective way to expedite reinforcement learning (RL). For example, propositional logic and first-order logic have been used as representations of such knowledge. We study the transfer of knowledge between tasks in which the timing of the events matters. We call such tasks temporal tasks. We concretize similarity between temporal tasks through a notion of logical transferability, and develop a transfer learning approach between different yet similar temporal tasks. We first propose an inference technique to extract metric interval temporal logic (MITL) formulas from labeled trajectories collected in RL of the two tasks. If logical transferability is identified through this inference, we construct a timed automaton for each of the inferred MITL formulas from both tasks. We perform RL on the extended state space, which includes the locations and clock valuations of the timed automata, for the source task. We then establish mappings between the corresponding components (clocks, locations, etc.) of the timed automata from the two tasks, and transfer the extended Q-functions based on the established mappings. Finally, we perform RL on the extended state space for the target task, starting with the transferred extended Q-functions. Our implementation results show, depending on how similar the source task and the target task are, that the sampling efficiency for the target task can be improved by up to one order of magnitude by performing RL in the extended state space, and further improved by up to another order of magnitude using the transferred extended Q-functions.
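
To illustrate the core mechanism described in the abstract, the following is a minimal sketch (not the authors' implementation) of tabular Q-learning over an "extended state" that augments the MDP state with a timed-automaton location and a discretized clock valuation, followed by a simple transfer of the extended Q-function to a target task. The environment, the toy automaton, the location mapping, and the clock rescaling are all illustrative assumptions; the MITL formula inference and automaton construction steps are omitted.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.5, 0.95, 0.1
ACTIONS = [-1, +1]                      # hypothetical 1-D moves on a small line world

def automaton_step(loc, clock, mdp_state):
    """Toy timed automaton: move to the accepting location once the goal cell
    is reached within 5 time steps; otherwise the clock keeps advancing."""
    if loc == 0 and mdp_state == 4 and clock <= 5:
        return 1, 0                     # accepting location, clock reset
    return loc, min(clock + 1, 6)       # cap the clock so the table stays finite

def q_learning(q, episodes=2000):
    """Q-learning over extended states (mdp_state, automaton_location, clock)."""
    for _ in range(episodes):
        s, loc, clock = 0, 0, 0
        for _ in range(20):
            ext = (s, loc, clock)
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(ext, b)])
            s2 = max(0, min(6, s + a))
            loc2, clock2 = automaton_step(loc, clock, s2)
            r = 1.0 if loc2 == 1 and loc == 0 else 0.0   # reward on acceptance
            ext2 = (s2, loc2, clock2)
            best = max(q[(ext2, b)] for b in ACTIONS)
            q[(ext, a)] += ALPHA * (r + GAMMA * best - q[(ext, a)])
            s, loc, clock = s2, loc2, clock2
    return q

def transfer(q_src, loc_map, clock_scale):
    """Map source extended Q-values to the target task via a correspondence
    between automaton locations and a rescaling of clock valuations."""
    q_tgt = defaultdict(float)
    for ((s, loc, clock), a), v in q_src.items():
        q_tgt[((s, loc_map[loc], min(int(clock * clock_scale), 6)), a)] = v
    return q_tgt

q_source = q_learning(defaultdict(float))
# Warm-start the target task with the transferred extended Q-function.
q_target = q_learning(transfer(q_source, loc_map={0: 0, 1: 1}, clock_scale=1.0),
                      episodes=500)
```

In this sketch the warm start matters because the transferred Q-values already encode which automaton locations and clock ranges lead to acceptance, which is the mechanism the paper credits for the reported sampling-efficiency gains.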


Similar Articles

Context-Based Meta-Reinforcement Learning With Bayesian Nonparametric Models.
IEEE Trans Pattern Anal Mach Intell. 2024 Oct;46(10):6948-6965. doi: 10.1109/TPAMI.2024.3386780. Epub 2024 Sep 5.

Exploration With Task Information for Meta Reinforcement Learning.
IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4033-4046. doi: 10.1109/TNNLS.2021.3121432. Epub 2023 Aug 4.

State-Temporal Compression in Reinforcement Learning With the Reward-Restricted Geodesic Metric.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5572-5589. doi: 10.1109/TPAMI.2021.3069005. Epub 2022 Aug 4.

Curriculum-Based Asymmetric Multi-Task Reinforcement Learning.
IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):7258-7269. doi: 10.1109/TPAMI.2022.3223872. Epub 2023 May 5.
