
A deployment safety case for AI-assisted prostate cancer diagnosis.

Authors

Jia Yan, Verrill Clare, White Kieron, Dolton Monica, Horton Margaret, Jafferji Mufaddal, Habli Ibrahim

Affiliations

Department of Computer Science, University of York, York, YO10 5GH, UK.

Department of Cellular Pathology, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK; Nuffield Department of Surgical Sciences, University of Oxford, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK; NIHR Oxford Biomedical Research Centre, Oxford University Hospitals NHS Foundation Trust, John Radcliffe Hospital, Headley Way, Oxford, OX3 9DU, UK.

Publication

Comput Biol Med. 2025 Jun;192(Pt B):110237. doi: 10.1016/j.compbiomed.2025.110237. Epub 2025 May 8.

Abstract

Deep learning (DL) has the potential to deliver significant clinical benefits. In recent years, an increasing number of DL-based systems have been approved by the relevant regulators, e.g., the FDA. Although obtaining regulatory approval is a prerequisite for deploying such systems in real-world use, it may not be sufficient. Regulatory approval gives confidence in the development process of such systems, but new hazardous events can arise depending on how a system is deployed in its intended clinical pathway or how it is used alongside other systems in complex healthcare settings. Such events can be difficult to predict during development. Indeed, most health systems and hospitals require self-verification before deploying a diagnostic medical device, which can be viewed as an additional safety measure. It is therefore important to continue assuring the safety of such systems in deployment. In this work, we address this urgent need based on the experience of a prospective study in UK hospitals as part of the ARTICULATE PRO project. The system considered here, developed by Paige for prostate cancer diagnosis, has obtained FDA approval in the US and UKCA marking in the UK. The methodology presented starts by mapping out the clinical workflow within which the system is deployed, then carries out hazard and risk analysis based on that workflow, and finally presents a deployment safety case, which provides a basis for deployment and for continual monitoring of the system's safety in use. We thereby systematically address the emergence of new hazardous events arising from deployment and present a way to continually assure the safety of a regulator-approved system in use.
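The three-step methodology the abstract describes (map the clinical workflow, log hazards against workflow steps, then check that the safety case has no unmitigated hazards) can be sketched as a minimal data model. This is an illustrative sketch only: the step names, hazards, and mitigations below are hypothetical examples, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One step in the clinical pathway in which the AI system is embedded."""
    name: str
    description: str

@dataclass
class Hazard:
    """A deployment hazard identified against a specific workflow step."""
    step: WorkflowStep
    description: str
    severity: str                                  # e.g. "minor", "major"
    mitigations: list[str] = field(default_factory=list)

def unmitigated(hazards: list[Hazard]) -> list[Hazard]:
    """Hazards still lacking any mitigation; these block the safety case."""
    return [h for h in hazards if not h.mitigations]

# Hypothetical workflow steps and hazard log for illustration.
scan = WorkflowStep("slide_scanning", "Digitise the prostate biopsy slide")
review = WorkflowStep("ai_review", "Run the model and show results to the pathologist")

hazards = [
    Hazard(scan, "Out-of-focus scan fed to the model", "major",
           mitigations=["automated scan-quality check before inference"]),
    Hazard(review, "Pathologist over-relies on the AI output", "major"),
]

for h in unmitigated(hazards):
    print(f"Open hazard at '{h.step.name}': {h.description}")
```

In a real deployment safety case, the open-hazard check would be re-run continually as the monitored workflow and hazard log evolve, rather than once at go-live.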


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6c41/12131227/ca85facb7d08/gr1.jpg
