Gong Ziyi, Duarte Fabiola, Mooney Richard, Pearson John
Department of Neurobiology, Duke University, Durham, NC, USA.
Department of Cell Biology, Duke University, Durham, NC, USA.
bioRxiv. 2025 Aug 19:2025.07.18.665446. doi: 10.1101/2025.07.18.665446.
Reinforcement learning (RL) offers a compelling account of how agents learn complex behaviors by trial and error, yet RL is predicated on the existence of a reward function provided by the agent's environment. By contrast, many skills are learned without external guidance, posing a challenge to RL's ability to account for self-directed learning. For instance, juvenile male zebra finches first memorize and then train themselves to reproduce the song of an adult male tutor through extensive practice. This process is believed to be guided by an internally computed assessment of performance quality, though the mechanism and development of this signal remain unknown. Here, we propose that, contrary to prevailing assumptions, tutor song memorization and performance assessment are subserved by the same neural circuit, one trained to predictively cancel tutor song. To test this hypothesis, we built models of a local forebrain circuit that learns to use contextual input from premotor regions to cancel tutor song auditory input via plasticity at different synaptic loci. We found that, after learning, excitatory projection neurons in these circuits exhibited population error codes signaling mismatches between the tutor song memory and birds' own performance, and these signals best matched experimental data when networks were trained with anti-Hebbian plasticity in the recurrent pathway through inhibitory interneurons. We also found that model learning proceeds in two stages, with an initial phase of sharpening error sensitivity followed by a fine-tuning period in which error responses to the tutor song are minimized. Finally, we showed that the error signal produced by this model can train a simple RL agent to replicate the spectrograms of adult bird songs. Together, our results suggest that purely local learning via predictive cancellation suffices for bootstrapping error signals capable of guiding self-directed learning of natural behaviors.
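The abstract's core mechanism, predictive cancellation of tutor-song auditory input via plasticity in an inhibitory pathway driven by premotor context, can be illustrated with a minimal rate-model sketch. The sketch below is our construction, not the paper's model: the layer sizes, the weight names (W_ci, W_ie), the one-hot premotor timing code, and the signed delta-rule stand-in for the anti-Hebbian update are all simplifying assumptions. After training, the rectified residual of the excitatory projection neurons is near zero for the memorized tutor song but positive for an imperfect rendition, i.e., a population error code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes are illustrative, not the paper's network dimensions.
T = 50                                     # timesteps per song rendition
n_inh, n_exc = 80, 30                      # inhibitory interneurons, excitatory projection neurons

context = np.eye(T)                        # premotor (HVC-like) timing code, one-hot per timestep
tutor = rng.uniform(0.0, 1.0, (T, n_exc))  # tutor-song auditory drive to the projection neurons

W_ci = np.abs(rng.normal(0.0, 0.3, (n_inh, T)))  # fixed context -> interneuron weights
W_ie = np.zeros((n_exc, n_inh))                  # plastic interneuron -> projection-neuron inhibition

eta = 0.05
for _ in range(500):                       # repeated tutor-song playback during memorization
    for t in range(T):
        h = np.maximum(W_ci @ context[t], 0.0)   # interneuron rates driven by context
        mismatch = tutor[t] - W_ie @ h           # auditory input minus inhibitory prediction
        # Signed delta-rule stand-in for the anti-Hebbian update: inhibition
        # strengthens when projection neurons fire above the prediction and
        # weakens when they are over-inhibited, driving the residual to zero.
        W_ie += eta * np.outer(mismatch, h)

def error_response(song):
    """Mean rectified residual: the population error code read out downstream."""
    pred = np.array([W_ie @ np.maximum(W_ci @ context[t], 0.0) for t in range(T)])
    return np.maximum(song - pred, 0.0).mean()

bos = tutor + rng.normal(0.0, 0.2, tutor.shape)  # imperfect "bird's own song"
print(f"error to tutor song: {error_response(tutor):.3f}")  # ~0 after cancellation
print(f"error to own song:   {error_response(bos):.3f}")    # > 0 signals the mismatch
```

The rectified readout reflects the intuition that projection-neuron firing rates can signal only positive prediction errors; the signed plasticity term is a convenience that guarantees convergence in this linear toy setting.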
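The abstract's final claim, that the bootstrapped error signal can train a simple RL agent to reproduce the song, can likewise be sketched. The toy agent below continues from the variables trained above (W_ci, W_ie, context, T, n_exc, rng) and learns one spectrogram frame per timestep with a Gaussian REINFORCE policy; treating the circuit's learned inhibitory prediction as the tutor memory and the negative mismatch as reward is our simplification, not the paper's reward definition.

```python
mu = np.zeros((T, n_exc))        # policy mean: one spectrogram frame per timestep
sigma, lr = 0.1, 0.05            # exploration noise and learning rate (hand-tuned)

# The circuit's learned inhibitory prediction serves as the tutor-song memory.
pred = np.array([W_ie @ np.maximum(W_ci @ context[t], 0.0) for t in range(T)])
baseline = -np.abs(mu - pred).mean(axis=1)        # per-timestep reward baseline

for _ in range(3000):
    actions = mu + sigma * rng.normal(size=mu.shape)  # sample one rendition
    rewards = -np.abs(actions - pred).mean(axis=1)    # reward = negative mismatch
    adv = rewards - baseline
    baseline += 0.05 * adv                            # running baseline for variance reduction
    # REINFORCE: grad of log N(a; mu, sigma^2) w.r.t. mu is (a - mu) / sigma^2.
    mu += lr * adv[:, None] * (actions - mu) / sigma**2

print(f"remaining mismatch: {np.abs(mu - pred).mean():.3f}")  # shrinks toward the noise floor
```

In this toy setting the mean mismatch should fall from roughly 0.5 toward the exploration noise floor, echoing in spirit, though not in detail, the abstract's demonstration that the error signal suffices to guide song imitation.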