Xu Zhe, Topcu Ufuk
University of Texas at Austin.
IJCAI (U S). 2019;28:4010-4018. doi: 10.24963/ijcai.2019/557.
Transferring high-level knowledge from a source task to a target task is an effective way to expedite reinforcement learning (RL). For example, propositional logic and first-order logic have been used as representations of such knowledge. We study the transfer of knowledge between tasks in which the timing of events matters; we call such tasks temporal tasks. We concretize similarity between temporal tasks through a notion of logical transferability, and develop a transfer learning approach between different yet similar temporal tasks. We first propose an inference technique to extract metric interval temporal logic (MITL) formulas from labeled trajectories collected in RL of the two tasks. If logical transferability is identified through this inference, we construct a timed automaton for each of the inferred MITL formulas from both tasks. We perform RL on the extended state space, which includes the locations and clock valuations of the timed automata, for the source task. We then establish mappings between the corresponding components (clocks, locations, etc.) of the timed automata from the two tasks, and transfer the extended Q-functions based on the established mappings. Finally, we perform RL on the extended state space for the target task, starting with the transferred extended Q-functions. Our implementation results show that, depending on how similar the source task and the target task are, the sampling efficiency for the target task can be improved by up to one order of magnitude by performing RL in the extended state space, and further improved by up to another order of magnitude by using the transferred extended Q-functions.
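As an illustration of the kind of timing constraint MITL can express (our own example, not one taken from the paper), the formula F_[0,10] goal, read "eventually, within 0 to 10 time units, goal holds," requires the agent to reach the goal before a deadline, while G_[0,5] ¬unsafe, read "always, during the first 5 time units, unsafe does not hold," forbids entering an unsafe region early on. A timed automaton monitoring such a formula pairs a discrete location (which part of the formula remains to be satisfied) with a clock whose valuation records how much of the deadline has elapsed; these are exactly the quantities appended to the environment state in the extended state space described above.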
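The following is a minimal, hypothetical sketch in Python (not the authors' code) of the two mechanisms the abstract credits for the gains in sampling efficiency: tabular Q-learning over an extended state that augments the environment state with the timed automaton's location and clock valuation, and transfer of the learned extended Q-function to a target task through a mapping between the automata's corresponding components. The environment interface (env.reset, env.actions, env.step returning labels) and the one-clock automaton are illustrative assumptions.

from collections import defaultdict
import random

class TimedAutomaton:
    """Toy one-clock automaton for a deadline formula such as F_[0,T] goal (hypothetical)."""
    def __init__(self, deadline):
        self.deadline = deadline            # clock bound taken from the MITL interval
        self.reset()

    def reset(self):
        self.location, self.clock = "trying", 0

    def step(self, labels):
        # Advance the clock by one discrete time unit and update the location
        # based on the atomic propositions observed in the environment.
        self.clock += 1
        if self.location == "trying":
            if "goal" in labels and self.clock <= self.deadline:
                self.location = "accept"
            elif self.clock > self.deadline:
                self.location = "reject"

def extended_state(env_state, aut):
    # Extended state = environment state + automaton location + clock valuation.
    return (env_state, aut.location, aut.clock)

def q_learning(env, aut, episodes, q=None, alpha=0.1, gamma=0.95, eps=0.1):
    # Tabular Q-learning over extended states; q may be pre-initialized with
    # transferred values (see transfer_q below).
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        aut.reset()
        done = False
        while not done:
            xs = extended_state(s, aut)
            a = (random.choice(env.actions) if random.random() < eps
                 else max(env.actions, key=lambda act: q[(xs, act)]))
            s2, labels, done = env.step(a)      # hypothetical env interface
            aut.step(labels)
            r = 1.0 if aut.location == "accept" else 0.0
            xs2 = extended_state(s2, aut)
            best_next = max(q[(xs2, act)] for act in env.actions)
            q[(xs, a)] += alpha * (r + gamma * best_next - q[(xs, a)])
            s = s2
            done = done or aut.location in ("accept", "reject")
    return q

def transfer_q(q_source, location_map, clock_scale):
    # Map source Q-values onto the target automaton's components: rename
    # locations and rescale clock valuations, then use the result to
    # initialize Q-learning for the target task.
    q_target = defaultdict(float)
    for ((env_s, loc, clk), a), v in q_source.items():
        q_target[((env_s, location_map[loc], int(clk * clock_scale)), a)] = v
    return q_target

In this sketch the target Q-table is a defaultdict, so extended states that the mapping does not cover fall back to zero-initialized values and are learned from scratch, matching the intuition that transfer helps most when the two tasks' automata align closely.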