Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review.

Author Information

The Second Xiangya Hospital of Central South University, No. 139, Renmin Road Central, Changsha, Hunan, China.

School of Life Sciences, Central South University, Changsha, Hunan, China.

Publication Information

J Healthc Eng. 2023 Feb 3;2023:9919269. doi: 10.1155/2023/9919269. eCollection 2023.

Abstract

BACKGROUND

Artificial intelligence (AI) has developed rapidly, and its applications now extend to clinical decision support systems (CDSS) aimed at improving healthcare quality. However, the limited interpretability of AI-driven CDSS poses a significant barrier to widespread application.

OBJECTIVE

This study reviews the literature on knowledge-based and data-based CDSS with respect to interpretability in health care. It highlights the relevance of interpretability for CDSS and identifies areas for improvement from technological and medical perspectives.

METHODS

A systematic search was conducted for interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles focusing on the interpretability of CDSS were included for analysis. Experienced researchers also manually reviewed the selected articles for inclusion/exclusion decisions and categorization.
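For illustration only, the sketch below shows how the PubMed portion of such a multi-database search might be scripted, assuming Biopython's Entrez module. The query string, email address, and parameters are hypothetical stand-ins, not the review's actual search strategy.

```python
# Hypothetical sketch of a scripted PubMed search via Biopython's Entrez module.
# The query terms below are illustrative assumptions, not the review's query.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact email

# Illustrative query combining CDSS terms with interpretability terms.
query = '("clinical decision support" OR CDSS) AND (interpretab* OR explainab*)'

# Restrict to publication dates 2011-2020, matching the review's search window.
handle = Entrez.esearch(db="pubmed", term=query,
                        mindate="2011", maxdate="2020", datetype="pdat",
                        retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```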

RESULTS

Based on the inclusion and exclusion criteria, 20 articles from 16 journals were finally selected for this review. Interpretability, understood here as a transparent model structure, a clear relationship between input and output, and the explainability of AI algorithms, is essential for CDSS application in healthcare settings. Methods for improving the interpretability of CDSS include ante-hoc methods for knowledge-based AI and other white-box models, such as fuzzy logic, decision rules, logistic regression, and decision trees, and post-hoc methods for black-box models, such as feature importance, sensitivity analysis, visualization, and activation maximization. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS.
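The distinction between ante-hoc and post-hoc methods can be made concrete with a short sketch. The following is a minimal, hypothetical example, assuming scikit-learn and synthetic stand-in data rather than any dataset from the reviewed studies: a shallow decision tree is interpretable by design (ante-hoc), while permutation feature importance explains a random forest treated as a black box (post-hoc).

```python
# Minimal sketch contrasting ante-hoc and post-hoc interpretability methods,
# assuming scikit-learn and synthetic stand-in data for clinical features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical tabular data standing in for clinical features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
feature_names = [f"feature_{i}" for i in range(6)]

# Ante-hoc: a shallow decision tree is transparent by design;
# its decision rules can be read off directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=feature_names))

# Post-hoc: a random forest is treated as a black box, and permutation
# importance estimates each feature's contribution to held-out accuracy.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```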

CONCLUSIONS

This review explores the meaning of interpretability for CDSS and summarizes current methods for improving it from technological and medical perspectives. The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying its properties, and developing an appropriate and objective metric for it; in addition, users' demands for interpretability and how explanations should be expressed and delivered are also directions for future research.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cce3/9918364/4cfa9d21cef2/JHE2023-9919269.001.jpg
