
Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review.

Author Information

Liu Yushu, Liu Chenxi, Zheng Jianing, Xu Chang, Wang Dan

Affiliations

School of Medicine and Health Management, Huazhong University of Science and Technology, Wuhan, China.

Major Disciplinary Platform under Double First-Class Initiative for Liberal Arts at Huazhong University of Science and Technology (Research Center for High-Quality Development of Hospitals), Wuhan, China.

Publication Information

J Med Internet Res. 2025 Aug 7;27:e73374. doi: 10.2196/73374.

DOI: 10.2196/73374
PMID: 40773743
Abstract

BACKGROUND

The integration of artificial intelligence (AI) in health care has significant potential, yet its acceptance by health care professionals (HCPs) is essential for successful implementation. Understanding HCPs' perspectives on the explainability and integrability of medical AI is crucial, as these factors influence their willingness to adopt and effectively use such technologies.

OBJECTIVE

This study aims to improve the acceptance and use of medical AI by exploring, from a user perspective, HCPs' understanding of its explainability and integrability.

METHODS

We performed a mixed systematic review, conducting a comprehensive search of the PubMed, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, and arXiv databases for studies published between 2014 and 2024. Studies concerning the explainability or integrability of medical AI were included. Study quality was assessed using the Joanna Briggs Institute critical appraisal checklist and the Mixed Methods Appraisal Tool, with only medium- or high-quality studies included. Qualitative data were analyzed via thematic analysis, while quantitative findings were synthesized narratively.

RESULTS

Out of 11,888 records initially retrieved, 22 (0.19%) studies met the inclusion criteria. All selected studies were published from 2020 onward, reflecting the recency and relevance of the topic. The majority (18/22, 82%) originated from high-income countries, and most (17/22, 77%) adopted qualitative methodologies, with the remainder (5/22, 23%) using quantitative or mixed methods approaches. From the included studies, a conceptual framework was developed that delineates HCPs' perceptions of explainability and integrability. Regarding explainability, HCPs predominantly emphasized postprocessing explanations, particularly aspects of local explainability such as feature relevance and case-specific outputs. Visual tools that enhance the explainability of AI decisions (eg, heat maps and feature attribution) were frequently mentioned as important enablers of trust and acceptance. For integrability, key concerns included workflow adaptation, system compatibility with electronic health records, and overall ease of use. These aspects were consistently identified as primary conditions for real-world adoption.
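The post hoc, local explanations the review's participants emphasized (feature relevance for a specific case) can be illustrated with a minimal occlusion-based attribution sketch. The linear risk model, its weights, and the patient vector below are hypothetical stand-ins for illustration only; they are not drawn from the review or any real clinical system.

```python
import numpy as np

# Hypothetical weights for a toy clinical risk model (stand-in, not real).
weights = np.array([0.8, -0.3, 0.5])

def predict(x):
    """Sigmoid risk score from a simple linear model."""
    return 1.0 / (1.0 + np.exp(-weights @ x))

def occlusion_attribution(x, baseline=None):
    """Post hoc local explanation: how much does the prediction drop
    when each feature is replaced by a baseline value?"""
    baseline = np.zeros_like(x) if baseline is None else baseline
    base_score = predict(x)
    attributions = []
    for i in range(len(x)):
        x_occluded = x.copy()
        x_occluded[i] = baseline[i]  # knock out one feature at a time
        attributions.append(base_score - predict(x_occluded))
    return np.array(attributions)

patient = np.array([1.2, 0.4, 0.9])  # one hypothetical case
attr = occlusion_attribution(patient)
# Positive attribution: the feature pushed this case's risk score up.
```

Heat maps over images work the same way at the pixel-patch level: occlude a region, remeasure the output, and color the region by the change. The case-specific sign and magnitude of each attribution is exactly the "feature relevance for this patient" that HCPs in the included studies asked for.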

CONCLUSIONS

To foster wider adoption of AI in clinical settings, future system designs must center on the needs of HCPs. Enhancing post hoc explainability and ensuring seamless integration into existing workflows are critical to building trust and promoting sustained use. The proposed conceptual framework can serve as a practical guide for developers, researchers, and policy makers in aligning AI solutions with frontline user expectations.

TRIAL REGISTRATION

PROSPERO CRD420250652253; https://www.crd.york.ac.uk/PROSPERO/view/CRD420250652253.

Similar Articles

1. Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review. J Med Internet Res. 2025 Aug 7;27:e73374. doi: 10.2196/73374.
2. Perspectives of Health Care Professionals on the Use of AI to Support Clinical Decision-Making in the Management of Multiple Long-Term Conditions: Interview Study. J Med Internet Res. 2025 Jul 4;27:e71980. doi: 10.2196/71980.
3. Prescription of Controlled Substances: Benefits and Risks.
4. Health professionals' experience of teamwork education in acute hospital settings: a systematic review of qualitative literature. JBI Database System Rev Implement Rep. 2016 Apr;14(4):96-137. doi: 10.11124/JBISRIR-2016-1843.
5. Understanding factors influencing the implementation of medicine risk communications by healthcare professionals in clinical practice: a systematic review using the Theoretical Domains Framework. Res Social Adm Pharm. 2024 Feb;20(2):86-98. doi: 10.1016/j.sapharm.2023.10.004. Epub 2023 Oct 18.
6. Challenges and Opportunities for Data Sharing Related to Artificial Intelligence Tools in Health Care in Low- and Middle-Income Countries: Systematic Review and Case Study From Thailand. J Med Internet Res. 2025 Feb 4;27:e58338. doi: 10.2196/58338.
7. Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers. J Med Internet Res. 2024 Aug 2;26:e49655. doi: 10.2196/49655.
8. Perceptions of, Barriers to, and Facilitators of the Use of AI in Primary Care: Systematic Review of Qualitative Studies. J Med Internet Res. 2025 Jun 25;27:e71186. doi: 10.2196/71186.
9. Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence. J Med Internet Res. 2023 Jan 10;25:e39742. doi: 10.2196/39742.
10. Health Care Professionals' Experiences and Opinions About Generative AI and Ambient Scribes in Clinical Documentation: Protocol for a Scoping Review. JMIR Res Protoc. 2025 Aug 8;14:e73602. doi: 10.2196/73602.
