Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA.
Neural Comput. 2010 Jun;22(6):1511-27. doi: 10.1162/neco.2010.08-09-1080.
Hyperbolic discounting of future outcomes is widely observed to underlie choice behavior in animals. Additionally, recent studies (Kobayashi & Schultz, 2008) have reported that hyperbolic discounting is observed even in neural systems underlying choice. However, the most prevalent models of temporal discounting, such as temporal difference learning, assume that future outcomes are discounted exponentially. Exponential discounting has been preferred largely because it can be expressed recursively, whereas hyperbolic discounting has heretofore been thought not to have a recursive definition. In this letter, we define a learning algorithm, hyperbolically discounted temporal difference (HDTD) learning, which constitutes a recursive formulation of the hyperbolic model.
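The contrast the abstract draws can be illustrated with a short sketch. The snippet below plots nothing and does not implement the paper's HDTD algorithm (whose update rule is not given here); it shows only the two standard discount functions being contrasted, plus a conventional TD(0) update, whose recursion `V(s) ← V(s) + α[r + γV(s') − V(s)]` depends on the constant per-step factor γ that exponential discounting provides. The parameter names `gamma` and `k` follow common usage (e.g., Mazur's hyperbolic form), not this letter's notation.

```python
import numpy as np

def exponential_discount(delay, gamma=0.9):
    # Exponential: a constant factor gamma per time step. This constancy is
    # exactly what permits the recursive Bellman form V(t) = r(t) + gamma*V(t+1).
    return gamma ** delay

def hyperbolic_discount(delay, k=0.1):
    # Hyperbolic (Mazur-style): 1 / (1 + k*delay). The effective per-step
    # discount rate falls with delay, so no single constant factor applies.
    return 1.0 / (1.0 + k * delay)

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    # Standard (exponentially discounted) TD(0) update, shown for contrast
    # with the hyperbolic case; HDTD itself is defined in the paper, not here.
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Hyperbolic curves decline steeply at first, then more slowly than any
# exponential in the tail -- the signature behavioral finding.
delays = np.arange(0, 51)
exp_vals = exponential_discount(delays)
hyp_vals = hyperbolic_discount(delays)
```

With the defaults above, the hyperbolic curve dominates the exponential one at long delays (e.g., at delay 50, 0.9^50 ≈ 0.005 versus 1/6 ≈ 0.167), which is why exponential models systematically underpredict animals' valuation of distant rewards.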