Institute of Digital Healthcare, WMG, University of Warwick, UK.
Stud Health Technol Inform. 2022 Jan 14;289:14-17. doi: 10.3233/SHTI210847.
Artificial Intelligence (AI) is increasingly applied within digital healthcare interventions (DHIs). The use of DHIs raises challenges for their safety assurance. In the UK, this challenge is exacerbated by regulatory requirements that place the onus of safety assurance not only on the manufacturer but also on the operator of a DHI. Making clinical safety claims, and evidencing the safe implementation and use of AI-based DHIs, requires expertise to understand risk and to act to control or mitigate it. Current health software standards, regulation, and guidance do not provide the insight necessary for safer implementation.
To interpret published guidance and policy related to AI in order to justify the clinical safety assurance of DHIs.
Assessment of UK health regulation policy, standards, and insights from AI institutions, using a published Hazard Assessment framework to structure safety justifications and articulate hazards relating to AI-based DHIs.
Identification of hazards for AI-enabled DHIs, relating to their implementation and use within healthcare delivery organizations.
By applying the method, we postulate that UK research into AI-based DHIs has highlighted issues that may affect safety and that need consideration in order to justify the safety of a DHI.