Robust and Privacy-Preserving Decentralized Deep Federated Learning Training: Focusing on Digital Healthcare Applications.

Publication Information

IEEE/ACM Trans Comput Biol Bioinform. 2024 Jul-Aug;21(4):890-901. doi: 10.1109/TCBB.2023.3243932. Epub 2024 Aug 8.

Abstract

Federated learning of deep neural networks has emerged as an evolving paradigm for distributed machine learning, gaining widespread attention due to its ability to update parameters without collecting raw data from users, especially in digital healthcare applications. However, the traditional centralized architecture of federated learning suffers from several problems (e.g., single point of failure, communication bottlenecks), and in particular a malicious server can infer gradients and cause gradient leakage. To tackle these issues, we propose a robust and privacy-preserving decentralized deep federated learning (RPDFL) training scheme. Specifically, we design a novel ring FL structure and a Ring-Allreduce-based data sharing scheme to improve the communication efficiency of RPDFL training. Furthermore, we improve the parameter distribution process based on the Chinese remainder theorem to update the execution of the threshold secret sharing, allowing healthcare edge nodes to drop out during training without causing data leakage and ensuring the robustness of RPDFL training under the Ring-Allreduce-based data sharing scheme. Security analysis indicates that RPDFL is provably secure. Experimental results show that RPDFL significantly outperforms standard FL methods in terms of model accuracy and convergence, and is suitable for digital healthcare applications.
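To make the communication-efficiency claim concrete, below is a minimal, illustrative sketch of the Ring-Allreduce averaging pattern the abstract refers to: each node's gradient is split into chunks, partial sums circulate around the ring (scatter-reduce), and completed chunks then circulate again (all-gather), so per-node traffic stays roughly constant regardless of the number of nodes. The function name `ring_allreduce` and the plain Python lists standing in for gradient vectors are assumptions for illustration only; the paper's RPDFL scheme additionally protects the exchanged chunks with CRT-based threshold secret sharing, which is omitted here.

```python
from typing import List


def ring_allreduce(grads: List[List[float]]) -> List[List[float]]:
    """Average equally sized gradient vectors over n ring-connected nodes."""
    n = len(grads)
    size = len(grads[0])
    # chunk c spans indices [bounds[c], bounds[c + 1])
    bounds = [c * size // n for c in range(n + 1)]
    buf = [list(g) for g in grads]  # per-node working buffers

    # Phase 1: scatter-reduce. At step s, node i sends chunk (i - s) mod n to
    # its successor, which adds it in. After n-1 steps, node i holds the fully
    # reduced chunk (i + 1) mod n.
    for s in range(n - 1):
        for i in range(n):
            c = (i - s) % n
            dst = (i + 1) % n
            for j in range(bounds[c], bounds[c + 1]):
                buf[dst][j] += buf[i][j]

    # Phase 2: all-gather. Completed chunks circulate around the ring until
    # every node holds the full reduced vector.
    for s in range(n - 1):
        for i in range(n):
            c = (i + 1 - s) % n
            dst = (i + 1) % n
            for j in range(bounds[c], bounds[c + 1]):
                buf[dst][j] = buf[i][j]

    # Divide by n so every node ends with the mean gradient.
    return [[v / n for v in node] for node in buf]


if __name__ == "__main__":
    local_grads = [[1.0, 2.0, 3.0, 4.0],
                   [2.0, 4.0, 6.0, 8.0],
                   [3.0, 6.0, 9.0, 12.0]]
    print(ring_allreduce(local_grads))  # every node: [2.0, 4.0, 6.0, 8.0]
```

In contrast to a parameter-server topology, no single node ever aggregates all raw updates, which is the structural property the ring FL design exploits for both efficiency and privacy.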
