Worthy Darrell A, Maddox W Todd
Texas A&M University.
The University of Texas at Austin.
J Math Psychol. 2014 Apr 1;59:41-49. doi: 10.1016/j.jmp.2013.10.001.
W.K. Estes often championed an approach to model development in which an existing model is augmented with one or more free parameters, and a comparison between the simpler model and the more complex, augmented model determines whether the additions are justified. Following this approach, we used Estes' (1950) own augmented learning equations to improve the fit and plausibility of a win-stay-lose-shift (WSLS) model that we have employed in much of our recent work. Estes also championed models that assume a comparison between multiple concurrent cognitive processes. In line with this, we develop a WSLS-Reinforcement Learning (WSLS-RL) model in which two outputs are compared: a WSLS process that yields a probability of staying with, or switching away from, the previously chosen option based on the last two decision outcomes, and an RL process that yields a probability of selecting each option based on a comparison of the options' expected values. Fits to data from three decision-making experiments suggest that the augmentations to the WSLS and RL models provide a better account of decision-making behavior. Our results also support the assertion that human participants weigh the output of WSLS and RL processes during decision-making.
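The abstract does not give the models' equations, but the mixture architecture it describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the parameter names (p_stay_win, p_shift_loss, alpha, beta, w) and the two-option setup are hypothetical, the WSLS process here conditions only on the most recent outcome for brevity (the paper's augmented WSLS model conditions on the last two outcomes and updates its probabilities with Estes' learning equations), and the RL process is a standard delta-rule learner with a softmax choice rule.

```python
import numpy as np


class WSLSRLModel:
    """Sketch of a hybrid model mixing a win-stay-lose-shift (WSLS) process
    with a reinforcement-learning (RL) process for a two-option task."""

    def __init__(self, p_stay_win=0.8, p_shift_loss=0.7,
                 alpha=0.1, beta=2.0, w=0.5):
        self.p_stay_win = p_stay_win      # P(stay | last outcome was a win)
        self.p_shift_loss = p_shift_loss  # P(shift | last outcome was a loss)
        self.alpha = alpha                # RL learning rate
        self.beta = beta                  # softmax inverse temperature
        self.w = w                        # weight given to the WSLS process (0..1)
        self.values = np.zeros(2)         # expected value of each option
        self.last_choice = None
        self.last_win = None

    def choice_probs(self):
        # RL process: softmax over the options' expected values.
        exp_v = np.exp(self.beta * self.values)
        p_rl = exp_v / exp_v.sum()

        # WSLS process: probability of repeating vs. switching from the last choice.
        if self.last_choice is None:
            p_wsls = np.array([0.5, 0.5])
        else:
            stay = self.p_stay_win if self.last_win else 1.0 - self.p_shift_loss
            p_wsls = np.full(2, 1.0 - stay)
            p_wsls[self.last_choice] = stay

        # Weighted mixture of the two processes' outputs.
        return self.w * p_wsls + (1.0 - self.w) * p_rl

    def update(self, choice, reward, win_threshold=0.0):
        # Delta-rule update of the chosen option's expected value.
        self.values[choice] += self.alpha * (reward - self.values[choice])
        self.last_choice = choice
        self.last_win = reward > win_threshold


# Example: simulate a few trials with noisy rewards.
rng = np.random.default_rng(0)
model = WSLSRLModel()
for _ in range(5):
    p = model.choice_probs()
    choice = rng.choice(2, p=p)
    reward = rng.normal(1.0 if choice == 0 else 0.5)
    model.update(choice, reward)
    print(choice, np.round(p, 3))
```

In a fitting context, the free parameters (including the mixture weight w) would be estimated per participant by maximizing the likelihood of the observed choices, which is how a comparison between the simple and augmented variants could be carried out.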