Department of Psychology, University of California, San Diego, La Jolla, CA 92093-0109, USA.
Sci Rep. 2024 Feb 26;14(1):4661. doi: 10.1038/s41598-024-52726-9.
Two hypotheses have been advanced for when motor sequence learning occurs: offline between bouts of practice or online concurrently with practice. A third possibility is that learning occurs both online and offline. A complication for differentiating between those hypotheses is a process known as reactive inhibition, whereby performance worsens over consecutively executed sequences, but dissipates during breaks. We advance a new quantitative modeling framework that incorporates reactive inhibition and in which the three learning accounts can be implemented. Our results show that reactive inhibition plays a far larger role in performance than is appreciated in the literature. Across four groups of participants in which break times and correct sequences per trial were varied, the best overall fits were provided by a hybrid model. The version of the offline model that does not account for reactive inhibition, which is widely assumed in the literature, had the worst fits. We discuss implications for extant hypotheses and directions for future research.
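To make the conceptual distinction concrete, the following is a minimal Python sketch of how online learning, offline learning, and reactive inhibition might jointly determine per-sequence completion time. The functional forms (exponential learning curves, linear build-up of inhibition within a trial, exponential dissipation during breaks) and all parameter names are illustrative assumptions, not the model fitted in the paper.

```python
# Hypothetical sketch (not the authors' fitted model): per-sequence completion
# time as a function of online learning, offline learning, and reactive
# inhibition. All functional forms and parameters below are assumptions.

import numpy as np


def simulate_trial_times(n_trials, seqs_per_trial, break_s,
                         t0=3.0,                 # baseline time per sequence (s)
                         a_on=1.2, r_on=0.05,    # online learning: gain from practiced sequences
                         a_off=0.8, r_off=0.3,   # offline learning: gain from breaks taken
                         k_ri=0.08,              # inhibition accrued per consecutive sequence
                         tau_ri=10.0):           # inhibition dissipation constant during breaks (s)
    """Return an array of simulated completion times, one per executed sequence."""
    times = []
    practiced = 0        # cumulative sequences executed (drives online learning)
    breaks_taken = 0     # cumulative breaks completed (drives offline learning)
    inhibition = 0.0     # current reactive-inhibition level
    for trial in range(n_trials):
        for _ in range(seqs_per_trial):
            online_gain = a_on * (1 - np.exp(-r_on * practiced))
            offline_gain = a_off * (1 - np.exp(-r_off * breaks_taken))
            times.append(t0 - online_gain - offline_gain + inhibition)
            practiced += 1
            inhibition += k_ri                    # builds up over consecutive sequences
        breaks_taken += 1
        inhibition *= np.exp(-break_s / tau_ri)   # dissipates during the rest break
    return np.array(times)


# Example: 10 trials of 8 correct sequences each, separated by 10 s breaks.
print(simulate_trial_times(n_trials=10, seqs_per_trial=8, break_s=10.0)[:8])
```

In this sketch, an online-only account would fix a_off at zero, an offline-only account would fix a_on at zero, and a hybrid account frees both; omitting the inhibition terms corresponds to the widely assumed offline model that does not account for reactive inhibition.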