Section of General Internal Medicine, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA.
Yale Center for Outcomes Research and Evaluation (CORE), 195 Church Street, New Haven, CT, 06510, USA.
BMC Med Inform Decis Mak. 2024 Sep 4;24(1):247. doi: 10.1186/s12911-024-02653-6.
Artificial intelligence (AI) is increasingly used for the prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite AI's potential to improve care, ethical concerns and mistrust of AI-enabled healthcare persist among the public and the medical community. Given the rapid and transformative recent growth of AI in cardiovascular care, and to inform practice guidelines and regulatory policies that facilitate the ethical and trustworthy use of AI in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients' and healthcare providers' perspectives on the use of AI in cardiovascular care.
In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients', caregivers', or healthcare providers' perspectives. The search, completed on May 24, 2022, was not limited by date or study design.
After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients' interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights.
This review revealed key ethical concerns, as well as barriers to and facilitators of trust, in AI-enabled medical devices from patients' and healthcare providers' perspectives. Successful integration of AI into cardiovascular care necessitates the implementation of mitigation strategies. These strategies should focus on enhanced regulatory oversight of the use of patient data and on promoting transparency around the use of AI in patient care.