Suppr 超能文献

Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI.

Affiliations

School of Psychological Sciences, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia.

Department of Engineering Mathematics, University of Bristol, Bristol, UK.

Publication Info

Sci Rep. 2023 Mar 27;13(1):4992. doi: 10.1038/s41598-023-31807-1.

DOI: 10.1038/s41598-023-31807-1
PMID: 36973473
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10042997/
Abstract

This study investigated the utility of supervised machine learning (SML) and explainable artificial intelligence (AI) techniques for modeling and understanding human decision-making during multiagent task performance. Long short-term memory (LSTM) networks were trained to predict the target selection decisions of expert and novice players completing a multiagent herding task. The results revealed that the trained LSTM models could not only accurately predict the target selection decisions of expert and novice players but that these predictions could be made at timescales that preceded a player's conscious intent. Importantly, the models were also expertise specific, in that models trained to predict the target selection decisions of experts could not accurately predict the target selection decisions of novices (and vice versa). To understand what differentiated expert and novice target selection decisions, we employed the explainable-AI technique, SHapley Additive explanation (SHAP), to identify what informational features (variables) most influenced model predictions. The SHAP analysis revealed that experts were more reliant on information about target direction of heading and the location of coherders (i.e., other players) compared to novices. The implications and assumptions underlying the use of SML and explainable-AI techniques for investigating and understanding human decision-making are discussed.
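The SHAP analysis described above attributes a model's prediction to its input features via Shapley values. As an illustrative sketch only (the feature names and toy scoring model below are hypothetical, not taken from the paper), the following computes exact Shapley values for a three-feature decision model; SHAP approximates this same quantity efficiently for large models such as the LSTMs used in the study.

```python
from itertools import combinations
from math import factorial

def model(features):
    # Hypothetical "target selection" score built from three illustrative
    # informational features (heading, distance, coherder location),
    # including one interaction term. Not the paper's actual model.
    h, d, c = features["heading"], features["distance"], features["coherder"]
    return 0.6 * h + 0.1 * d + 0.3 * c + 0.2 * h * c

def shapley_values(model, instance, baseline):
    """Exact Shapley values: each feature's average marginal contribution
    to the model output over all feature subsets, relative to a baseline."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Weight for subsets of size k in the Shapley formula.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                present = set(subset)
                with_f = {x: instance[x] if x in present or x == f
                          else baseline[x] for x in names}
                without_f = {x: instance[x] if x in present
                             else baseline[x] for x in names}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

instance = {"heading": 1.0, "distance": 1.0, "coherder": 1.0}
baseline = {"heading": 0.0, "distance": 0.0, "coherder": 0.0}
phi = shapley_values(model, instance, baseline)
# The attributions sum to model(instance) - model(baseline) (efficiency
# property), so they fully decompose the prediction across features.
```

For the toy weights above, "heading" receives the largest attribution, mirroring the kind of ranking the paper reports (experts relying more on target heading and coherder location); in practice the `shap` library computes approximate values for trained networks rather than this exponential-time exact form.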


Figures (1-6):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/9c1fa7b72d47/41598_2023_31807_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/cdc5e7d7f053/41598_2023_31807_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/e522f69711a7/41598_2023_31807_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/d63e35823a24/41598_2023_31807_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/bdf47cee1ce3/41598_2023_31807_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5758/10042997/5156efd57193/41598_2023_31807_Fig6_HTML.jpg

Similar Articles

1. Predicting and understanding human action decisions during skillful joint-action using supervised machine learning and explainable-AI.
   Sci Rep. 2023 Mar 27;13(1):4992. doi: 10.1038/s41598-023-31807-1.
2. An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study.
   J Med Internet Res. 2023 Sep 6;25:e42047. doi: 10.2196/42047.
3. CVD22: Explainable artificial intelligence determination of the relationship of troponin to D-Dimer, mortality, and CK-MB in COVID-19 patients.
   Comput Methods Programs Biomed. 2023 May;233:107492. doi: 10.1016/j.cmpb.2023.107492. Epub 2023 Mar 18.
4. Unboxing Deep Learning Model of Food Delivery Service Reviews Using Explainable Artificial Intelligence (XAI) Technique.
   Foods. 2022 Jul 8;11(14):2019. doi: 10.3390/foods11142019.
5. Predicting Bulk Average Velocity with Rigid Vegetation in Open Channels Using Tree-Based Machine Learning: A Novel Approach Using Explainable Artificial Intelligence.
   Sensors (Basel). 2022 Jun 10;22(12):4398. doi: 10.3390/s22124398.
6. An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach.
   Asian J Psychiatr. 2023 Jan;79:103316. doi: 10.1016/j.ajp.2022.103316. Epub 2022 Nov 7.
7. Interpretation of ensemble learning to predict water quality using explainable artificial intelligence.
   Sci Total Environ. 2022 Aug 1;832:155070. doi: 10.1016/j.scitotenv.2022.155070. Epub 2022 Apr 6.
8. Explainable Machine Learning Model for Predicting First-Time Acute Exacerbation in Patients with Chronic Obstructive Pulmonary Disease.
   J Pers Med. 2022 Feb 7;12(2):228. doi: 10.3390/jpm12020228.
9. Evaluating the clinical utility of artificial intelligence assistance and its explanation on the glioma grading task.
   Artif Intell Med. 2024 Feb;148:102751. doi: 10.1016/j.artmed.2023.102751. Epub 2024 Jan 2.
10. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions.
    Comput Methods Programs Biomed. 2022 Mar;215:106620. doi: 10.1016/j.cmpb.2022.106620. Epub 2022 Jan 5.

Cited By

1. Modelling human navigation and decision dynamics in a first-person herding task.
   R Soc Open Sci. 2024 Oct 30;11(10):231919. doi: 10.1098/rsos.231919. eCollection 2024 Oct.

References

1. Digital Transformation in Smart Farm and Forest Operations Needs Human-Centered AI: Challenges and Future Directions.
   Sensors (Basel). 2022 Apr 15;22(8):3043. doi: 10.3390/s22083043.
2. Task dynamics define the contextual emergence of human corralling behaviors.
   PLoS One. 2021 Nov 15;16(11):e0260046. doi: 10.1371/journal.pone.0260046. eCollection 2021.
3. Action Anticipation Using Pairwise Human-Object Interactions and Transformers.
   IEEE Trans Image Process. 2021;30:8116-8129. doi: 10.1109/TIP.2021.3113114. Epub 2021 Sep 27.
4. Promises and challenges of human computational ethology.
   Neuron. 2021 Jul 21;109(14):2224-2238. doi: 10.1016/j.neuron.2021.05.021. Epub 2021 Jun 17.
5. From Local Explanations to Global Understanding with Explainable AI for Trees.
   Nat Mach Intell. 2020 Jan;2(1):56-67. doi: 10.1038/s42256-019-0138-9. Epub 2020 Jan 17.
6. Time series forecasting of COVID-19 transmission in Canada using LSTM networks.
   Chaos Solitons Fractals. 2020 Jun;135:109864. doi: 10.1016/j.chaos.2020.109864. Epub 2020 May 8.
7. Machine Learning Analysis for Quantitative Discrimination of Dried Blood Droplets.
   Sci Rep. 2020 Feb 24;10(1):3313. doi: 10.1038/s41598-020-59847-x.
8. Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis.
   Accid Anal Prev. 2020 Mar;136:105405. doi: 10.1016/j.aap.2019.105405. Epub 2019 Dec 20.
9. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery.
   Nat Biomed Eng. 2018 Oct;2(10):749-760. doi: 10.1038/s41551-018-0304-0. Epub 2018 Oct 10.
10. Human social motor solutions for human-machine interaction in dynamical task contexts.
    Proc Natl Acad Sci U S A. 2019 Jan 22;116(4):1437-1446. doi: 10.1073/pnas.1813164116. Epub 2019 Jan 7.