Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington.
Department of Radiology, University of Washington School of Medicine, Seattle, Washington.
J Am Coll Radiol. 2024 Oct;21(10):1569-1574. doi: 10.1016/j.jacr.2024.04.027. Epub 2024 May 22.
With promising artificial intelligence (AI) algorithms receiving FDA clearance, the potential impact of these models on clinical outcomes must be evaluated locally before their integration into routine workflows. Robust validation infrastructures are pivotal for assessing the accuracy and generalizability of these deep learning algorithms, ensuring both patient safety and health equity. Protected health information concerns, intellectual property rights, and the diverse requirements of individual models impede the development of rigorous external validation infrastructures. The authors offer suggestions for addressing the challenges of building efficient, customizable, and cost-effective infrastructures for the external validation of AI models at large medical centers and institutions. They present comprehensive steps for establishing an AI inferencing infrastructure outside clinical systems, enabling examination of the local performance of AI algorithms before practice- or systemwide implementation, and they promote an evidence-based approach to adopting AI models that can enhance radiology workflows and improve patient outcomes.