Isaacks David B, Borkowski Andrew A
Veterans Affairs Sunshine Healthcare Network, Tampa, Florida.
University of South Florida Morsani College of Medicine, Tampa.
Fed Pract. 2024 Feb;41(2):40-43. doi: 10.12788/fp.0454. Epub 2024 Feb 15.
BACKGROUND: Artificial intelligence (AI) has great potential to improve health care quality, safety, efficiency, and access. However, widespread adoption of AI in health care lags behind other sectors. Challenges including data limitations, misaligned incentives, and organizational obstacles have hindered implementation. Strategic demonstrations, partnerships, aligned incentives, and continued investment are needed to enable responsible adoption of AI. High reliability health care organizations offer insights into safely implementing major initiatives through frameworks such as the Patient Safety Adoption Framework, which provides practical guidance on leadership, culture, process, measurement, and person-centeredness for successfully adopting safety practices. High reliability health care organizations ensure consistently safe, high-quality care through a culture focused on reliability, accountability, and learning from errors and near misses.

OBSERVATIONS: The Veterans Health Administration applied a high reliability health care model to instill safety principles and improve outcomes. As the use of AI becomes more widespread, ensuring its ethical development is crucial to avoiding new risks and harms. The US Department of Veterans Affairs National AI Institute proposed a Trustworthy AI Framework tailored to federal health care, with 6 principles: purposeful; effective and safe; secure and private; fair and equitable; transparent and explainable; and accountable and monitored. The framework aims to manage risks and build trust.

CONCLUSIONS: Combining these AI principles with high reliability safety principles can enable successful, trustworthy AI that improves health care quality, safety, efficiency, and access. Overcoming barriers to AI adoption will require strategic efforts, partnerships, and investment to implement AI responsibly, safely, and equitably within the health care context.