Pati Sarthak, Kumar Sourav, Varma Amokh, Edwards Brandon, Lu Charles, Qu Liangqiong, Wang Justin J, Lakshminarayanan Anantharaman, Wang Shih-Han, Sheller Micah J, Chang Ken, Singh Praveer, Rubin Daniel L, Kalpathy-Cramer Jayashree, Bakas Spyridon
Center for Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA.
Division of Computational Pathology, Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA.
Patterns (N Y). 2024 Jul 12;5(7):100974. doi: 10.1016/j.patter.2024.100974.
Artificial intelligence (AI) shows potential to improve health care by leveraging data to build models that can inform clinical workflows. However, access to large quantities of diverse data is needed to develop robust, generalizable models. Data sharing across institutions is not always feasible due to legal, security, and privacy concerns. Federated learning (FL) allows for multi-institutional training of AI models, obviating the need for data sharing, albeit with its own security and privacy concerns. Specifically, the insights exchanged during FL can leak information about institutional data. In addition, FL can introduce issues when there is limited trust among the entities performing the computation. With the growing adoption of FL in health care, it is imperative to elucidate its potential risks. We thus summarize the privacy-preserving FL literature in this work, with special regard to health care. We draw attention to threats and review mitigation approaches. We anticipate this review to become a health-care researcher's guide to security and privacy in FL.
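To make concrete the FL setting the abstract describes, the following is a minimal sketch of federated averaging (FedAvg), the canonical FL scheme: each institution trains locally on its own data and shares only model parameters (the "insights" that can leak information), never raw patient records. The toy single-weight linear model, the hospital datasets, and all names here are illustrative assumptions, not taken from the paper.

```python
# Minimal FedAvg sketch: institutions exchange model weights, not data.
# Toy model: y ≈ w * x, trained by SGD on squared error (illustrative only).

def local_update(weights, data, lr=0.1, epochs=5):
    """One institution's local training pass; only the result is shared."""
    w = weights
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)^2
    return w

def fed_avg(global_w, institutions, rounds=10):
    """Server loop: broadcast the global model, collect and average local models."""
    for _ in range(rounds):
        local_models = [local_update(global_w, data) for data in institutions]
        global_w = sum(local_models) / len(local_models)  # parameter averaging
    return global_w

# Two hypothetical hospitals whose data follow the same relation y = 2x;
# neither dataset ever leaves its owner, yet the shared weights still
# reveal something about the underlying data -- the leakage the paper studies.
hospital_a = [(1.0, 2.0), (2.0, 4.0)]
hospital_b = [(3.0, 6.0), (0.5, 1.0)]
w = fed_avg(0.0, [hospital_a, hospital_b])
```

Even in this toy, the averaged weight converges toward the data-generating slope, illustrating why exchanged parameters are themselves a privacy surface.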