Princeton Neuroscience Institute and Department of Psychology, Princeton University, United States.
Curr Opin Neurobiol. 2012 Dec;22(6):956-62. doi: 10.1016/j.conb.2012.05.008. Epub 2012 Jun 11.
The hierarchical structure of human and animal behavior has been of critical interest in neuroscience for many years. Yet understanding the neural processes that give rise to such structure remains an open challenge. In recent research, a new perspective on hierarchical behavior has begun to take shape, inspired by ideas from machine learning, and in particular the framework of hierarchical reinforcement learning. Hierarchical reinforcement learning builds on traditional reinforcement learning mechanisms, extending them to accommodate temporally extended behaviors or subroutines. The resulting computational paradigm has begun to influence both theoretical and empirical work in neuroscience, conceptually aligning the study of hierarchical behavior with research on other aspects of learning and decision making, and giving rise to some thought-provoking new findings.
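To make the abstract's central idea concrete: hierarchical reinforcement learning extends the standard one-step temporal-difference update so that a single "option" (a temporally extended subroutine) can run for several time steps before its value is updated. Below is a minimal sketch of that SMDP-style Q-learning update; the toy states, option names, and parameters are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of SMDP Q-learning, the core update behind the
# "options" (temporally extended subroutines) used in hierarchical
# reinforcement learning. Names and the toy setup are illustrative.
from collections import defaultdict

def smdp_q_update(Q, state, option, reward, next_state, k, options,
                  alpha=0.1, gamma=0.9):
    """One SMDP Q-learning step after `option` ran for k time steps,
    with `reward` the cumulative discounted reward collected along
    the way. Discounting by gamma**k reflects the option's duration."""
    best_next = max(Q[(next_state, o)] for o in options)
    target = reward + (gamma ** k) * best_next
    Q[(state, option)] += alpha * (target - Q[(state, option)])
    return Q

# Toy usage: the hypothetical option "go-to-door" ran for 3 steps
# from state "room", yielding cumulative reward 1.0, ending at "door".
Q = defaultdict(float)
options = ["go-to-door", "open-door"]
smdp_q_update(Q, "room", "go-to-door", 1.0, "door", 3, options)
```

Note that when k = 1 this reduces exactly to ordinary Q-learning, which is the sense in which hierarchical reinforcement learning "builds on traditional reinforcement learning mechanisms".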