Department of Cancer and Surgery, Imperial College London, UK.
Clin Radiol. 2024 May;79(5):338-345. doi: 10.1016/j.crad.2024.01.026. Epub 2024 Feb 8.
The implementation of artificial intelligence (AI) applications in routine practice, following regulatory approval, is currently limited by practical concerns around reliability, accountability, trust, safety, and governance, in addition to factors such as cost-effectiveness and institutional information technology support. When a technology is new and relatively untested in a field, professional confidence is lacking and there is a perceived need to go beyond the baseline level of validation and compliance. In this article, we propose an approach that goes beyond standard regulatory compliance for AI applications approved for marketing, comprising independent benchmarking in the laboratory as well as clinical audit in practice, with the aims of increasing trust and preventing harm.