
Test-retest reliability of reinforcement learning parameters.

Affiliations

Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands.

Cognitive Neuroscience Department, Radboud University Medical Centre, Nijmegen, the Netherlands.

Publication Information

Behav Res Methods. 2024 Aug;56(5):4582-4599. doi: 10.3758/s13428-023-02203-4. Epub 2023 Sep 8.

Abstract

It has recently been suggested that parameter estimates of computational models can be used to understand individual differences at the process level. One area of research in which this approach, called computational phenotyping, has taken hold is computational psychiatry. One requirement for successful computational phenotyping is that behavior and parameters are stable over time. Surprisingly, the test-retest reliability of behavior and model parameters remains unknown for most experimental tasks and models. The present study seeks to close this gap by investigating the test-retest reliability of canonical reinforcement learning models in the context of two often-used learning paradigms: a two-armed bandit and a reversal learning task. We tested independent cohorts for the two tasks (N = 69 and N = 47) via an online testing platform with a between-test interval of five weeks. Whereas reliability was high for personality and cognitive measures (with ICCs ranging from .67 to .93), it was generally poor for the parameter estimates of the reinforcement learning models (with ICCs ranging from .02 to .52 for the bandit task and from .01 to .71 for the reversal learning task). Given that simulations indicated that our procedures could detect high test-retest reliability, this suggests that a significant proportion of the variability must be ascribed to the participants themselves. In support of that hypothesis, we show that mood (stress and happiness) can partly explain within-participant variability. Taken together, these results are critical for current practices in computational phenotyping and suggest that individual variability should be taken into account in the future development of the field.
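Two building blocks of the study lend themselves to a brief illustration: the canonical reinforcement learning model (a Rescorla-Wagner learning rule with a softmax choice rule, governed by a learning rate alpha and an inverse temperature beta) and the intraclass correlation coefficient (ICC) used to quantify test-retest reliability. The Python sketch below is not the authors' analysis code; it is a minimal illustration under standard assumptions. It simulates a two-armed bandit, recovers the parameters by maximum likelihood, and computes one common ICC variant, the two-way random-effects ICC(2,1), across two simulated sessions. All function names and task settings (200 trials, reward probabilities 0.7/0.3, 40 simulated participants) are illustrative choices, not details taken from the paper.

```python
"""Minimal sketch (not the authors' code): Rescorla-Wagner / softmax model
for a two-armed bandit, maximum-likelihood recovery, and ICC(2,1)."""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate(alpha, beta, n_trials=200, p_reward=(0.7, 0.3)):
    """Simulate choices and rewards of a Q-learner on a static two-armed bandit."""
    q = np.zeros(2)
    choices, rewards = np.empty(n_trials, int), np.empty(n_trials, int)
    for t in range(n_trials):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))  # softmax over two arms
        c = int(rng.random() < p1)
        r = int(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])                          # Rescorla-Wagner update
        choices[t], rewards[t] = c, r
    return choices, rewards

def neg_log_lik(params, choices, rewards):
    """Negative log-likelihood of the observed choices under (alpha, beta)."""
    alpha, beta = params
    q, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        nll -= np.log(p1 if c == 1 else 1.0 - p1)
        q[c] += alpha * (r - q[c])
    return nll

def fit(choices, rewards):
    """Maximum-likelihood fit with box constraints on alpha and beta."""
    res = minimize(neg_log_lik, x0=[0.5, 2.0], args=(choices, rewards),
                   bounds=[(1e-3, 1.0), (1e-3, 20.0)], method="L-BFGS-B")
    return res.x

def icc_2_1(x1, x2):
    """Two-way random-effects, absolute-agreement ICC(2,1) for two sessions."""
    data = np.column_stack([x1, x2])
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.var(data.mean(axis=1), ddof=1)   # between-subject mean square
    ms_cols = n * np.var(data.mean(axis=0), ddof=1)   # between-session mean square
    ss_err = ((data - data.mean(1, keepdims=True)
               - data.mean(0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# Recoverability check: perfectly stable simulated "participants".
true_alpha = rng.uniform(0.1, 0.9, size=40)
true_beta = rng.uniform(1.0, 10.0, size=40)
fits = []
for session in range(2):
    est = [fit(*simulate(a, b)) for a, b in zip(true_alpha, true_beta)]
    fits.append(np.array(est))
print("ICC(2,1) alpha:", round(icc_2_1(fits[0][:, 0], fits[1][:, 0]), 2))
print("ICC(2,1) beta :", round(icc_2_1(fits[0][:, 1], fits[1][:, 1]), 2))
```

Because the simulated participants are stable by construction, the recovered parameters should yield high ICCs; low ICCs in real data therefore point to variability in the participants rather than in the estimation procedure, which mirrors the reasoning in the abstract.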

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/563b/11289054/b53fca0d7933/13428_2023_2203_Fig1_HTML.jpg
