

Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging.

Authors

Koutsoubis Nikolas, Waqas Asim, Yilmaz Yasin, Ramachandran Ravi P, Schabath Matthew B, Rasool Ghulam

Affiliations

Department of Machine Learning, Moffitt Cancer Center, Tampa, Fla.

Department of Electrical Engineering, University of South Florida, 4202 E Fowler Ave, Tampa, FL 33620-9951.

Publication

Radiol Artif Intell. 2025 May 14:e240637. doi: 10.1148/ryai.240637.

Abstract

Artificial intelligence (AI) has demonstrated strong potential for automating medical imaging tasks, with applications spanning disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, because large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data; instead, model parameters, such as model weights, are exchanged between participating sites. Despite its promise, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment shifts in the data distribution can degrade model performance, making uncertainty quantification essential; in federated learning, this task is particularly challenging because of data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications. © RSNA, 2025.
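To make the weight-exchange loop described above concrete, the following is a minimal federated averaging (FedAvg) sketch, assuming a toy logistic-regression model and two synthetic "sites"; the function names (local_update, federated_averaging), the hyperparameters, and the synthetic data are illustrative placeholders, not the specific methods surveyed in the review.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    # One round of local training at a participating site.
    # A plain logistic-regression gradient step stands in for
    # whatever imaging model each site would actually train.
    w = weights.copy()
    for _ in range(epochs):
        logits = data @ w
        probs = 1.0 / (1.0 + np.exp(-logits))
        grad = data.T @ (probs - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, site_datasets, rounds=5):
    # Central-server loop: only model weights travel between the
    # server and the sites; raw patient data never leaves a site.
    for _ in range(rounds):
        site_weights, site_sizes = [], []
        for data, labels in site_datasets:
            site_weights.append(local_update(global_w, data, labels))
            site_sizes.append(len(labels))
        # Standard FedAvg aggregation: average the returned weights,
        # weighting each site by its local sample count.
        total = sum(site_sizes)
        global_w = sum(w * (n / total) for w, n in zip(site_weights, site_sizes))
    return global_w

# Toy usage: synthetic data standing in for two hospitals.
rng = np.random.default_rng(0)
sites = []
for _ in range(2):
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(float)
    sites.append((X, y))

w = federated_averaging(np.zeros(5), sites)
print(w)
```

The sample-size-weighted average is the standard FedAvg aggregation rule; the privacy and uncertainty issues raised in the abstract arise because these exchanged weights can still leak information about local data and because the sites' data distributions may differ.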

