
A federated learning architecture for secure and private neuroimaging analysis.

Authors

Stripelis Dimitris, Gupta Umang, Saleem Hamza, Dhinagar Nikhil, Ghai Tanmay, Anastasiou Chrysovalantis, Sánchez Rafael, Ver Steeg Greg, Ravi Srivatsan, Naveed Muhammad, Thompson Paul M, Ambite José Luis

Affiliations

University of Southern California, Information Sciences Institute, Marina del Rey, CA 90292, USA.

University of Southern California, Computer Science Department, Los Angeles, CA 90089, USA.

Publication

Patterns (N Y). 2024 Aug 1;5(8):101031. doi: 10.1016/j.patter.2024.101031. eCollection 2024 Aug 9.

Abstract

The amount of biomedical data continues to grow rapidly. However, collecting data from multiple sites for joint analysis remains challenging due to security, privacy, and regulatory concerns. To overcome this challenge, we use federated learning, which enables distributed training of neural network models over multiple data sources without sharing data. Each site trains the neural network over its private data for some time and then shares the neural network parameters (i.e., weights and/or gradients) with a federation controller, which in turn aggregates the local models and sends the resulting community model back to each site, and the process repeats. Our federated learning architecture, MetisFL, provides strong security and privacy. First, sample data never leave a site. Second, neural network parameters are encrypted before transmission and the global neural model is computed under fully homomorphic encryption. Finally, we use information-theoretic methods to limit information leakage from the neural model to prevent a "curious" site from performing model inversion or membership attacks. We present a thorough evaluation of the performance of secure, private federated learning in neuroimaging tasks, including for predicting Alzheimer's disease and for brain age gap estimation (BrainAGE) from magnetic resonance imaging (MRI) studies in challenging, heterogeneous federated environments where sites have different amounts of data and statistical distributions.
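The training loop described in the abstract (local training at each site, parameter sharing with a federation controller, weighted aggregation into a community model, repeat) can be sketched as follows. This is an illustrative toy, not the MetisFL implementation: the "local training" is a single gradient step of a one-parameter mean estimator, standing in for local neural-network training, and encryption is omitted.

```python
# Sketch of the federated averaging loop from the abstract.
# Raw data never leaves a site; only model parameters are exchanged.
from typing import List


def local_update(weights: List[float], data: List[float], lr: float = 0.1) -> List[float]:
    # Toy stand-in for local training: one gradient step pulling each
    # parameter toward this site's data mean.
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]


def aggregate(site_weights: List[List[float]], site_sizes: List[int]) -> List[float]:
    # Controller-side aggregation: average of site parameters,
    # weighted by each site's sample count (FedAvg-style).
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(dim)
    ]


def federated_rounds(site_data: List[List[float]], rounds: int = 5) -> List[float]:
    community = [0.0]  # shared initial community model
    sizes = [len(d) for d in site_data]
    for _ in range(rounds):
        # Each site trains on its private data, then shares only parameters.
        local = [local_update(list(community), d) for d in site_data]
        community = aggregate(local, sizes)
    return community


# Two sites with different amounts of data and different distributions,
# mirroring the heterogeneous federated environments in the evaluation.
model = federated_rounds([[1.0, 1.0], [3.0, 3.0, 3.0, 3.0]])
```

With learning rate 0.1 the community parameter follows w_t = 0.9·w_{t-1} + 0.1·m̄, converging geometrically toward the sample-weighted mean m̄ = (1·2 + 3·4)/6 of the site means. In MetisFL this aggregation step would additionally run under fully homomorphic encryption, so the controller never sees plaintext parameters.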


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/afe2/11368680/41bcf1dae245/gr1.jpg
