Department of Computer Science, University of Haifa, Haifa, Israel.
Department of Cognitive Sciences, University of Haifa, 3498838, Haifa, Israel.
Sci Rep. 2022 Mar 18;12(1):4736. doi: 10.1038/s41598-022-08863-0.
Deep neural network (DNN) models have the potential to provide new insights into the study of cognitive processes, such as human decision making, due to their high capacity and data-driven design. While these models may be able to go beyond theory-driven models in predicting human behaviour, their opaque nature limits their ability to explain how an operation is carried out, undermining their usefulness as a scientific tool. Here we suggest using a DNN model as an exploratory tool to identify predictable and consistent human behaviour, and then using explicit, theory-driven models to characterise the high-capacity model. To demonstrate our approach, we trained an exploratory DNN model to predict human decisions in a four-armed bandit task. We found that this model was more accurate than two explicit models: a reward-oriented model geared towards choosing the most rewarding option, and a reward-oblivious model that was trained to predict human decisions without information about rewards. Using experimental simulations, we were able to characterise the exploratory model in terms of the explicit models. We found that the exploratory model converged with the reward-oriented model's predictions when one option was clearly better than the others, but that in other cases it predicted pattern-based explorations akin to the reward-oblivious model's predictions. These results suggest that predictable decision patterns that are not solely reward-oriented may contribute to human decisions. Importantly, we demonstrate how theory-driven cognitive models can be used to characterise the operation of DNNs, making DNNs a useful explanatory tool in scientific investigation.
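As a rough illustration of the modelling setup the abstract describes, the sketch below trains a small recurrent network to predict a participant's next choice in a four-armed bandit task from the history of past choices and rewards, and compares it with a simple reward-oriented Q-learning baseline that turns incrementally updated values into softmax choice probabilities. This is a minimal sketch, not the authors' code: the architecture, the hyperparameters (hidden size, learning rate, alpha, beta), and the synthetic session data are all illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions, not the authors' implementation):
# a GRU-based choice predictor for a four-armed bandit task, plus a
# reward-oriented Q-learning baseline with softmax choice probabilities.
import numpy as np
import torch
import torch.nn as nn

N_ARMS = 4

class ChoicePredictor(nn.Module):
    """GRU mapping a (one-hot choice, reward) history to next-choice logits."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=N_ARMS + 1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, N_ARMS)

    def forward(self, x):
        # x: (batch, trials, N_ARMS + 1) -> logits for the choice on the next trial
        h, _ = self.gru(x)
        return self.head(h)

def q_learning_predictions(choices, rewards, alpha=0.3, beta=3.0):
    """Reward-oriented baseline: softmax over delta-rule-updated Q-values."""
    q = np.zeros(N_ARMS)
    probs = []
    for c, r in zip(choices, rewards):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        probs.append(p)
        q[c] += alpha * (r - q[c])  # update only the chosen arm's value
    return np.array(probs)

if __name__ == "__main__":
    # Synthetic stand-in for one participant's session (illustrative only).
    rng = np.random.default_rng(0)
    T = 100
    choices = rng.integers(0, N_ARMS, size=T)
    rewards = rng.random(T)

    # Network input: one-hot choice concatenated with the obtained reward.
    onehot = np.eye(N_ARMS)[choices]
    x = torch.tensor(np.concatenate([onehot, rewards[:, None]], axis=1),
                     dtype=torch.float32).unsqueeze(0)      # (1, T, 5)
    targets = torch.tensor(choices[1:], dtype=torch.long)   # choice on trial t+1

    model = ChoicePredictor()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(50):                       # brief training loop
        logits = model(x)[0, :-1]             # predict trial t+1 from history up to t
        loss = loss_fn(logits, targets)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    baseline = q_learning_predictions(choices, rewards)
    print("DNN cross-entropy:", loss.item())
    print("Baseline mean probability of actual choices:",
          baseline[np.arange(T), choices].mean())
```

In the study's setting, such a network would be fit across many participants' sessions, and its trial-by-trial predictions compared against the explicit reward-oriented and reward-oblivious models to characterise when and how it departs from purely reward-driven behaviour.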