Bayesian Networks in the Management of Hospital Admissions: A Comparison between Explainable AI and Black Box AI during the Pandemic

Authors

Nicora Giovanna, Catalano Michele, Bortolotto Chandra, Achilli Marina Francesca, Messana Gaia, Lo Tito Antonio, Consonni Alessio, Cutti Sara, Comotto Federico, Stella Giulia Maria, Corsico Angelo, Perlini Stefano, Bellazzi Riccardo, Bruno Raffaele, Preda Lorenzo

Affiliations

Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy.

Diagnostic Imaging and Radiotherapy Unit, Department of Clinical, Surgical, Diagnostic and Pediatric Sciences, University of Pavia, 27100 Pavia, Italy.

Publication

J Imaging. 2024 May 10;10(5):117. doi: 10.3390/jimaging10050117.

DOI: 10.3390/jimaging10050117
PMID: 38786571
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11122655/
Abstract

Artificial Intelligence (AI) and Machine Learning (ML) approaches that could learn from large data sources have been identified as useful tools to support clinicians in their decisional process; AI and ML implementations have had a rapid acceleration during the recent COVID-19 pandemic. However, many ML classifiers are "black box" to the final user, since their underlying reasoning process is often obscure. Additionally, the performance of such models suffers from poor generalization ability in the presence of dataset shifts. Here, we present a comparison between an explainable-by-design ("white box") model (Bayesian Network (BN)) versus a black box model (Random Forest), both studied with the aim of supporting clinicians of Policlinico San Matteo University Hospital in Pavia (Italy) during the triage of COVID-19 patients. Our aim is to evaluate whether the BN predictive performances are comparable with those of a widely used but less explainable ML model such as Random Forest and to test the generalization ability of the ML models across different waves of the pandemic.
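The "explainable-by-design" property of a Bayesian Network comes from the fact that a prediction is just a sum of products of conditional probability table (CPT) entries, each of which a clinician can inspect. The sketch below illustrates this with inference by enumeration on a tiny discrete network; the structure (Age → Severity → Admission) and all probability values are hypothetical, not taken from the paper's actual model.

```python
# Minimal sketch of "white box" inference in a discrete Bayesian network.
# Structure and CPT values are illustrative only (NOT from the paper):
#   Age -> Severity -> Admission

# P(Severity | Age): every entry is directly readable by the end user.
P_sev_given_age = {
    ("high", "old"): 0.5, ("low", "old"): 0.5,
    ("high", "young"): 0.2, ("low", "young"): 0.8,
}
# P(Admit=yes/no | Severity)
P_admit_given_sev = {
    ("yes", "high"): 0.9, ("no", "high"): 0.1,
    ("yes", "low"): 0.2, ("no", "low"): 0.8,
}

def p_admit_given_age(age):
    """P(Admit=yes | Age=age), marginalizing Severity by enumeration.

    Each term in the sum is an explicit, inspectable path through the
    network -- this is what makes the prediction explainable by design.
    """
    return sum(
        P_sev_given_age[(sev, age)] * P_admit_given_sev[("yes", sev)]
        for sev in ("high", "low")
    )

# old:   0.5 * 0.9 + 0.5 * 0.2 = 0.55
# young: 0.2 * 0.9 + 0.8 * 0.2 = 0.34
print(p_admit_given_age("old"))
print(p_admit_given_age("young"))
```

A Random Forest making the same prediction offers no comparable decomposition: its output is an average over hundreds of trees, which is why the paper classes it as a "black box" and relies on post-hoc explanation instead.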


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7939/11122655/d8a301a86abb/jimaging-10-00117-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7939/11122655/a8901f54838a/jimaging-10-00117-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7939/11122655/def57341fb32/jimaging-10-00117-g003.jpg

Similar Articles

1. Bayesian Networks in the Management of Hospital Admissions: A Comparison between Explainable AI and Black Box AI during the Pandemic.
   J Imaging. 2024 May 10;10(5):117. doi: 10.3390/jimaging10050117.
2. Explainable Machine Learning Model to Predict COVID-19 Severity Among Older Adults in the Province of Quebec.
   Ann Fam Med. 2023 Jan 1;21(21 Suppl 1):3619. doi: 10.1370/afm.21.s1.3619.
3. Explainable artificial intelligence in emergency medicine: an overview.
   Clin Exp Emerg Med. 2023 Dec;10(4):354-362. doi: 10.15441/ceem.23.145. Epub 2023 Nov 28.
4. Model-agnostic explainable artificial intelligence tools for severity prediction and symptom analysis on Indian COVID-19 data.
   Front Artif Intell. 2023 Dec 4;6:1272506. doi: 10.3389/frai.2023.1272506. eCollection 2023.
5. Detection of COVID-19 in X-ray Images Using Densely Connected Squeeze Convolutional Neural Network (DCSCNN): Focusing on Interpretability and Explainability of the Black Box Model.
   Sensors (Basel). 2022 Dec 18;22(24):9983. doi: 10.3390/s22249983.
6. Improvement of a prediction model for heart failure survival through explainable artificial intelligence.
   Front Cardiovasc Med. 2023 Aug 1;10:1219586. doi: 10.3389/fcvm.2023.1219586. eCollection 2023.
7. Causability and explainability of artificial intelligence in medicine.
   Wiley Interdiscip Rev Data Min Knowl Discov. 2019 Jul-Aug;9(4):e1312. doi: 10.1002/widm.1312. Epub 2019 Apr 2.
8. Explainable AI and machine learning: performance evaluation and explainability of classifiers on educational data mining inspired career counseling.
   Educ Inf Technol (Dordr). 2023;28(1):1081-1116. doi: 10.1007/s10639-022-11221-2. Epub 2022 Jul 16.
9. Explainable AI for Bioinformatics: Methods, Tools and Applications.
   Brief Bioinform. 2023 Sep 20;24(5). doi: 10.1093/bib/bbad236.
10. Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization.
   J Pers Med. 2021 Nov 16;11(11):1213. doi: 10.3390/jpm11111213.

Cited By

1. Which explanations do clinicians prefer? A comparative evaluation of XAI understandability and actionability in predicting the need for hospitalization.
   BMC Med Inform Decis Mak. 2025 Jul 16;25(1):269. doi: 10.1186/s12911-025-03045-0.

References

1. Clinical prediction models and the multiverse of madness.
   BMC Med. 2023 Dec 18;21(1):502. doi: 10.1186/s12916-023-03212-y.
2. Performance of an AI algorithm during the different phases of the COVID pandemics: what can we learn from the AI and vice versa.
   Eur J Radiol Open. 2023 Dec;11:100497. doi: 10.1016/j.ejro.2023.100497. Epub 2023 Jun 19.
3. Artificial Intelligence for Personalized Genetics and New Drug Development: Benefits and Cautions.
   Bioengineering (Basel). 2023 May 19;10(5):613. doi: 10.3390/bioengineering10050613.
4. Assessing the effects of data drift on the performance of machine learning models used in clinical sepsis prediction.
   Int J Med Inform. 2023 May;173:104930. doi: 10.1016/j.ijmedinf.2022.104930. Epub 2022 Nov 19.
5. Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions.
   Artif Intell Med. 2023 Jan;135:102471. doi: 10.1016/j.artmed.2022.102471. Epub 2022 Dec 1.
6. Predicting emerging SARS-CoV-2 variants of concern through a One Class dynamic anomaly detection algorithm.
   BMJ Health Care Inform. 2022 Dec;29(1). doi: 10.1136/bmjhci-2022-100643.
7. Investigating the understandability of XAI methods for enhanced user experience: When Bayesian network users became detectives.
   Artif Intell Med. 2022 Dec;134:102438. doi: 10.1016/j.artmed.2022.102438. Epub 2022 Nov 9.
8. Personalised Dosing Using the CURATE.AI Algorithm: Protocol for a Feasibility Study in Patients with Hypertension and Type II Diabetes Mellitus.
   Int J Environ Res Public Health. 2022 Jul 23;19(15):8979. doi: 10.3390/ijerph19158979.
9. Changes in laboratory value improvement and mortality rates over the course of the pandemic: an international retrospective cohort study of hospitalised patients infected with SARS-CoV-2.
   BMJ Open. 2022 Jun 23;12(6):e057725. doi: 10.1136/bmjopen-2021-057725.
10. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
   Nat Mach Intell. 2019 May;1(5):206-215. doi: 10.1038/s42256-019-0048-x. Epub 2019 May 13.