Wells Brian J, Nguyen Hieu M, McWilliams Andrew, Pallini Matt, Bovi Amy, Kuzma Andrew, Kramer Justin, Chou Shih-Hsiung, Hetherington Timothy, Corn Patricia, Taylor Yhenneko J, Cuison Audrey, Gagen Mary, Isreal McKenzie
Department of Biostatistics and Data Science, Wake Forest University School of Medicine, Winston-Salem, NC, USA.
Center for Health System Sciences, Atrium Health, Charlotte, NC, USA.
NPJ Digit Med. 2025 Aug 11;8(1):514. doi: 10.1038/s41746-025-01900-y.
Health systems face the challenge of balancing innovation and safety to responsibly implement artificial intelligence (AI) solutions. The rapid proliferation, growing complexity, ethical considerations, and rising demand for these tools require timely and efficient processes for rigorous evaluation and ongoing monitoring. Current AI evaluation frameworks often lack practical guidance for health systems to address these challenges. To fill this gap, we developed a prescriptive evaluation framework informed by a literature review, in-depth interviews with key stakeholders, including patients, and a multidisciplinary design workshop. The resulting framework provides health systems with an outline of the resources, structures, criteria, and template documents needed to enable pre-implementation evaluation and post-implementation monitoring of AI solutions. Health systems will need to treat this or any alternative framework as a living document to maintain its relevance and effectiveness as the AI landscape and regulations continue to evolve.