Geukes Foppen Remco Jan, Gioia Vincenzo, Gupta Shreya, Johnson Curtis L, Giantsidis John, Papademetris Maria
Independent, Anzio, Italy.
Independent, Salerno, Italy.
J Diabetes Sci Technol. 2025 May;19(3):620-627. doi: 10.1177/19322968241304434. Epub 2024 Dec 26.
The use of artificial intelligence (AI) in diabetes management is emerging as a promising way to improve therapy monitoring and personalization. However, integrating such technologies into the clinical setting poses significant challenges related to safety, security, and compliance with respect to sensitive patient data, as well as potential direct consequences for patient health. This article provides guidance for developers and researchers on identifying and addressing these safety, security, and compliance challenges in AI systems for diabetes management. We emphasize the role of explainable AI (xAI) systems as the foundational strategy for ensuring security and compliance and for fostering user trust and informed clinical decision-making, which is paramount in diabetes care solutions. The article examines both the technical and regulatory dimensions essential for developing explainable applications in this field. Technically, we demonstrate how understanding the lifecycle phases of AI systems aids in constructing xAI frameworks while addressing security concerns and implementing risk-mitigation strategies at each stage. From a regulatory perspective, we analyze key Governance, Risk, and Compliance (GRC) standards established by entities such as the Food and Drug Administration (FDA), providing specific guidelines to ensure safety, efficacy, and ethical integrity in AI-enabled diabetes care applications. By addressing these interconnected aspects, this article aims to deliver actionable insights and methodologies for developing trustworthy AI-enabled diabetes care solutions that ensure safety, efficacy, and compliance with ethical standards, enhance patient engagement, and improve clinical outcomes.