
The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.

Affiliations

School of Information Systems, Queensland University of Technology, Australia.

IBM Research AI, Bangalore, India.

Publication Information

Comput Biol Med. 2022 Jul;146:105587. doi: 10.1016/j.compbiomed.2022.105587. Epub 2022 May 8.

DOI: 10.1016/j.compbiomed.2022.105587
PMID: 35551007
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9080676/
Abstract

Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computerized tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning for a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust for complex decision tasks in healthcare has not been understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 based on CT images and examines the influence of the quality of these explanations on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare ground truth explanations with machine learning explanations. Our evaluation shows that the explanations produced by different algorithms were often correct (high precision) when compared with the radiologist-annotated ground truth, but a significant number of explanations were missed (significantly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust for the diagnosis of COVID-19. Our findings show that while the clinicians' trust in automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms that are not close to human judgement. Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system.
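The precision/recall comparison described in the abstract can be illustrated with a minimal sketch. The function below is an illustrative assumption, not the paper's code: it treats an explanation as a flat binary mask over image regions and scores a model-generated mask against a radiologist-annotated one. The toy masks are chosen so the model's highlighted regions are all correct but incomplete, reproducing the high-precision/low-recall pattern the study reports.

```python
def precision_recall(pred_mask, true_mask):
    """Pixel-level precision and recall for binary explanation masks.

    pred_mask: regions highlighted by the ML explanation algorithm.
    true_mask: regions annotated by the radiologist (ground truth).
    """
    tp = sum(p and t for p, t in zip(pred_mask, true_mask))          # correctly highlighted
    fp = sum(p and not t for p, t in zip(pred_mask, true_mask))      # spurious highlights
    fn = sum(t and not p for p, t in zip(pred_mask, true_mask))      # missed annotations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy example: the explanation covers only half of the annotated region,
# so every highlight is correct (precision 1.0) but coverage is poor (recall 0.5).
true_mask = [1, 1, 1, 1, 0, 0, 0, 0]
pred_mask = [1, 1, 0, 0, 0, 0, 0, 0]
print(precision_recall(pred_mask, true_mask))  # -> (1.0, 0.5)
```

In this framing, the clinicians' preference for "higher recall" means the explanation should cover more of the radiologist-annotated region, even at some cost to precision.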


Figures (PMC9080676):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/435c21889687/gr1_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/6b38dc4127ff/gr2_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/db28f6411aaa/gr3_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/cfcc883a68e8/gr4_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/db9159a8e1a7/gr5_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/ba90e635aeee/gr6_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/c0855aafdb45/gr7_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/c5585f2b7e5c/gr8_lrg.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5aee/9080676/8a38e4109491/gr9_lrg.jpg

Similar Articles

1. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19.
   Comput Biol Med. 2022 Jul;146:105587. doi: 10.1016/j.compbiomed.2022.105587. Epub 2022 May 8.
2. Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust.
   PLoS One. 2022 Dec 22;17(12):e0278487. doi: 10.1371/journal.pone.0278487. eCollection 2022.
3. Developing a Deep Neural Network model for COVID-19 diagnosis based on CT scan images.
   Math Biosci Eng. 2023 Aug 14;20(9):16236-16258. doi: 10.3934/mbe.2023725.
4. Deep learning: definition and perspectives for thoracic imaging.
   Eur Radiol. 2020 Apr;30(4):2021-2030. doi: 10.1007/s00330-019-06564-3. Epub 2019 Dec 6.
5. Tuning hyperparameters of machine learning algorithms and deep neural networks using metaheuristics: A bioinformatics study on biomedical and biological cases.
   Comput Biol Chem. 2022 Apr;97:107619. doi: 10.1016/j.compbiolchem.2021.107619. Epub 2021 Dec 24.
6. Comparing deep neural network and other machine learning algorithms for stroke prediction in a large-scale population-based electronic medical claims database.
   Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul;2017:3110-3113. doi: 10.1109/EMBC.2017.8037515.
7. EDNC: Ensemble Deep Neural Network for COVID-19 Recognition.
   Tomography. 2022 Mar 21;8(2):869-890. doi: 10.3390/tomography8020071.
8. Assessment of Automated Identification of Phases in Videos of Cataract Surgery Using Machine Learning and Deep Learning Techniques.
   JAMA Netw Open. 2019 Apr 5;2(4):e191860. doi: 10.1001/jamanetworkopen.2019.1860.
9. Chest X-ray image phase features for improved diagnosis of COVID-19 using convolutional neural network.
   Int J Comput Assist Radiol Surg. 2021 Feb;16(2):197-206. doi: 10.1007/s11548-020-02305-w. Epub 2021 Jan 9.
10. Explainability does not improve biochemistry staff trust in artificial intelligence-based decision support.
    Ann Clin Biochem. 2022 Nov;59(6):447-449. doi: 10.1177/00045632221128687. Epub 2022 Sep 22.

Cited By

1. Quality of interaction between clinicians and artificial intelligence systems. A systematic review.
   Future Healthc J. 2024 Aug 17;11(3):100172. doi: 10.1016/j.fhj.2024.100172. eCollection 2024 Sep.
2. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis.
   Comput Struct Biotechnol J. 2024 Aug 12;24:542-560. doi: 10.1016/j.csbj.2024.08.005. eCollection 2024 Dec.
3. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities.
   AIMS Public Health. 2024 Jan 5;11(1):58-109. doi: 10.3934/publichealth.2024004. eCollection 2024.
4. Efficient management of pulmonary embolism diagnosis using a two-step interconnected machine learning model based on electronic health records data.
   Health Inf Sci Syst. 2024 Mar 6;12(1):17. doi: 10.1007/s13755-024-00276-9. eCollection 2024 Dec.
5. Development and validation of a machine learning model for differential diagnosis of malignant pleural effusion using routine laboratory data.
   Ther Adv Respir Dis. 2023 Jan-Dec;17:17534666231208632. doi: 10.1177/17534666231208632.
6. Advanced slime mould algorithm incorporating differential evolution and Powell mechanism for engineering design.
   iScience. 2023 Aug 28;26(10):107736. doi: 10.1016/j.isci.2023.107736. eCollection 2023 Oct 20.
7. An enhanced decision-making framework for predicting future trends of sharing economy.
   PLoS One. 2023 Oct 5;18(10):e0291626. doi: 10.1371/journal.pone.0291626. eCollection 2023.
8. Comparing machine learning algorithms to predict COVID-19 mortality using a dataset including chest computed tomography severity score data.
   Sci Rep. 2023 Jul 13;13(1):11343. doi: 10.1038/s41598-023-38133-6.

References

1. Explainable multiple abnormality classification of chest CT volumes.
   Artif Intell Med. 2022 Oct;132:102372. doi: 10.1016/j.artmed.2022.102372. Epub 2022 Aug 12.
2. Explainable Deep Learning Models in Medical Image Analysis.
   J Imaging. 2020 Jun 20;6(6):52. doi: 10.3390/jimaging6060052.
3. Human Evaluation of Models Built for Interpretability.
   Proc AAAI Conf Hum Comput Crowdsourc. 2019;7(1):59-67. Epub 2019 Oct 28.
4. COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest CT Images.
   Front Med (Lausanne). 2020 Dec 23;7:608525. doi: 10.3389/fmed.2020.608525. eCollection 2020.
5. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
   Sci Rep. 2020 Nov 11;10(1):19549. doi: 10.1038/s41598-020-76550-z.
6. Efficient and Effective Training of COVID-19 Classification Networks With Self-Supervised Dual-Track Learning to Rank.
   IEEE J Biomed Health Inform. 2020 Oct;24(10):2787-2797. doi: 10.1109/JBHI.2020.3018181. Epub 2020 Aug 20.
7. Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations.
   Kunstliche Intell (Oldenbourg). 2020;34(2):193-198. doi: 10.1007/s13218-020-00636-z. Epub 2020 Jan 21.
8. Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography.
   Cell. 2020 Jun 11;181(6):1423-1433.e11. doi: 10.1016/j.cell.2020.04.045. Epub 2020 May 4.
9. Sensitivity of Chest CT for COVID-19: Comparison to RT-PCR.
   Radiology. 2020 Aug;296(2):E115-E117. doi: 10.1148/radiol.2020200432. Epub 2020 Feb 19.
10. Interpretable Decision Sets: A Joint Framework for Description and Prediction.
    KDD. 2016 Aug;2016:1675-1684. doi: 10.1145/2939672.2939874.