Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI.

Affiliations

School of Psychological Sciences, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia.

Department of Engineering Mathematics, University of Bristol, Bristol, UK.

Publication Information

Sci Rep. 2023 Mar 27;13(1):4992. doi: 10.1038/s41598-023-31807-1.

Abstract

This study investigated the utility of supervised machine learning (SML) and explainable artificial intelligence (AI) techniques for modeling and understanding human decision-making during multiagent task performance. Long short-term memory (LSTM) networks were trained to predict the target selection decisions of expert and novice players completing a multiagent herding task. The results revealed not only that the trained LSTM models could accurately predict the target selection decisions of expert and novice players, but also that these predictions could be made at timescales that preceded a player's conscious intent. Importantly, the models were also expertise specific, in that models trained to predict the target selection decisions of experts could not accurately predict the target selection decisions of novices (and vice versa). To understand what differentiated expert and novice target selection decisions, we employed the explainable-AI technique SHapley Additive exPlanations (SHAP) to identify which informational features (variables) most influenced model predictions. The SHAP analysis revealed that experts were more reliant than novices on information about the target's direction of heading and the location of coherders (i.e., other players). The implications and assumptions underlying the use of SML and explainable-AI techniques for investigating and understanding human decision-making are discussed.
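To make the modeling approach described above concrete, the following is a minimal sketch of an LSTM sequence classifier of the general kind the abstract describes. It is illustrative only, not the authors' implementation: the window length, feature count, number of candidate targets, architecture, and training settings are all assumptions, and random arrays stand in for the real herding-task state trajectories.

```python
# Minimal sketch, assuming hypothetical shapes: WINDOW time steps of
# N_FEATURES state variables per trial, with one of N_TARGETS herd targets
# as the label. Random arrays stand in for real player/herd trajectories.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 50      # hypothetical time steps per input sequence
N_FEATURES = 8   # hypothetical per-step variables (positions, headings, ...)
N_TARGETS = 4    # hypothetical number of selectable targets

X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, N_TARGETS, size=1000)

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(64),                                # summarize the sequence
    layers.Dense(N_TARGETS, activation="softmax"),  # one class per target
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```

Similarly, the hedged sketch below shows how SHAP attributions could be computed for such a model with the shap library. The choice of GradientExplainer, the background sample, and the aggregation over trials and time steps are assumptions; the abstract does not report these details.

```python
# Hedged sketch of SHAP feature attribution for the model above. The use of
# GradientExplainer, the background sample, and the aggregation below are
# illustrative assumptions, not details taken from the paper.
import shap

background = X[:100]                     # reference sample for the explainer
explainer = shap.GradientExplainer(model, background)
sv = np.asarray(explainer.shap_values(X[:10]))

# The axis layout of shap_values varies across shap versions, so locate the
# input-feature axis by its (unique) size and average |SHAP| over the rest.
feat_axis = sv.shape.index(N_FEATURES)
other_axes = tuple(a for a in range(sv.ndim) if a != feat_axis)
mean_abs_shap = np.abs(sv).mean(axis=other_axes)
print(mean_abs_shap)  # coarse ranking of which state variables mattered most
```

In this illustrative setup, a high mean |SHAP| value on, say, a heading-related feature would correspond to the kind of finding the abstract reports, namely that experts relied more heavily on target heading direction and coherder location.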

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/9c1fa7b72d47/41598_2023_31807_Fig1_HTML.jpg
