

Wasserstein Distance-Based Deep Leakage from Gradients

Authors

Wang Zifan, Peng Changgen, He Xing, Tan Weijie

Affiliations

State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China.

Guizhou Big Data Academy, Guizhou University, Guiyang 550025, China.

Publication

Entropy (Basel). 2023 May 17;25(5):810. doi: 10.3390/e25050810.

DOI: 10.3390/e25050810
PMID: 37238565
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10217429/
Abstract

Federated learning protects private information in the training data by sharing only averaged gradients. However, the Deep Leakage from Gradients (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, causing privacy leakage. The original algorithm suffers from slow model convergence and poor accuracy of the reconstructed images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. WDLG uses the Wasserstein distance as the training loss function to improve both the quality of the reconstructed images and the convergence of the model. The Wasserstein distance, which is hard to compute directly, is made iteratively computable via the Lipschitz condition and the Kantorovich-Rubinstein duality. Theoretical analysis establishes the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that WDLG outperforms DLG in training speed and in the quality of the inverted images. The experiments also show that differential privacy can serve as a perturbation-based defense, suggesting directions for building privacy-preserving deep learning frameworks.
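The attack family the abstract describes can be illustrated with a minimal toy sketch: the attacker sees only the gradient a client shares, then optimizes a dummy input so that its gradient matches the shared one. Everything below is a made-up toy (a small linear model with squared-error loss and a known label), not the paper's setup; DLG/WDLG apply the same gradient-matching idea to deep networks, with WDLG replacing the squared-distance matching loss used here by a Wasserstein distance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy setup: a client holds a private input x_true and trains a linear
# model y_hat = W @ x with squared-error loss. Only the gradient of the
# loss w.r.t. W is shared, as in federated learning.
W = rng.normal(size=(4, 3))          # model weights (known to the attacker)
x_true = np.array([1.0, -2.0, 0.5])  # private training input
y = np.array([0.3, -0.1, 0.7, 0.2])  # target, assumed known for simplicity

def shared_gradient(x):
    """Gradient of 0.5 * ||W @ x - y||^2 with respect to W."""
    r = W @ x - y
    return np.outer(r, x)

g_true = shared_gradient(x_true)     # what the client reveals

def matching_loss(x):
    """Squared distance between dummy and shared gradients (the DLG objective)."""
    d = shared_gradient(x) - g_true
    return np.sum(d * d)

def matching_grad(x):
    """Analytic gradient of matching_loss w.r.t. the dummy input x."""
    r = W @ x - y
    d = np.outer(r, x) - g_true
    return 2.0 * (W.T @ d @ x + d.T @ r)

# DLG-style attack: start from a random dummy input and minimize the
# gradient-matching loss, driving the dummy input toward the private one.
x0 = rng.normal(size=3) * 0.1
init_loss = matching_loss(x0)
res = minimize(matching_loss, x0, jac=matching_grad, method="BFGS")

print(f"matching loss: {init_loss:.4f} -> {res.fun:.3e}")
print("recovered input:", res.x)
```

The choice of matching loss is exactly where WDLG intervenes: the paper argues that a Wasserstein distance between gradient distributions gives smoother optimization than the plain squared distance, hence faster convergence and sharper reconstructions.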

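The Wasserstein distance that WDLG adopts as its loss is cheap to sanity-check in one dimension, where it has a closed form (the area between the two empirical CDFs) and the Kantorovich-Rubinstein dual has an obvious witness. The samples below are made up for illustration; the paper's critic-based iterative estimator is needed only because no such closed form exists for high-dimensional gradients.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two empirical distributions; v is u shifted by 5, so W1 should be exactly 5.
u = np.array([0.0, 1.0, 3.0])
v = u + 5.0

# Closed-form 1-D Wasserstein-1 distance (area between the empirical CDFs).
d_closed = wasserstein_distance(u, v)

# Primal definition: with equal sample counts, the optimal transport plan
# simply pairs sorted samples one-to-one.
d_primal = np.mean(np.abs(np.sort(u) - np.sort(v)))

# Kantorovich-Rubinstein duality: W1 = sup over 1-Lipschitz f of
# E_u[f] - E_v[f]. For a pure shift, f(t) = -t already attains the supremum.
d_dual = np.mean(-u) - np.mean(-v)

print(d_closed, d_primal, d_dual)  # all three agree
```

WDLG's trick, following the same duality, is to parameterize f by a neural network constrained to be (approximately) 1-Lipschitz and maximize the dual objective iteratively, which sidesteps the intractable primal optimal-transport problem.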

[Figures 1-14 of the article (entropy-25-00810-g001 through g014); full-resolution images are available via the PMC full-text link above.]

Similar Articles

1. Wasserstein Distance-Based Deep Leakage from Gradients. Entropy (Basel). 2023 May 17;25(5):810. doi: 10.3390/e25050810.
2. Recover User's Private Training Image Data by Gradient in Federated Learning. Sensors (Basel). 2022 Sep 21;22(19):7157. doi: 10.3390/s22197157.
3. Wasserstein Generative Adversarial Networks Based Differential Privacy Metaverse Data Sharing. IEEE J Biomed Health Inform. 2024 Nov;28(11):6348-6359. doi: 10.1109/JBHI.2023.3287092. Epub 2024 Nov 6.
4. Multi-site fMRI analysis using privacy-preserving federated learning and domain adaptation: ABIDE results. Med Image Anal. 2020 Oct;65:101765. doi: 10.1016/j.media.2020.101765. Epub 2020 Jul 2.
5. Stable and Fast Deep Mutual Information Maximization Based on Wasserstein Distance. Entropy (Basel). 2023 Nov 30;25(12):1607. doi: 10.3390/e25121607.
6. Privacy-enhanced momentum federated learning via differential privacy and chaotic system in industrial Cyber-Physical systems. ISA Trans. 2022 Sep;128(Pt A):17-31. doi: 10.1016/j.isatra.2021.09.007. Epub 2021 Sep 13.
7. Generative Image Reconstruction From Gradients. IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):21-31. doi: 10.1109/TNNLS.2024.3383722. Epub 2025 Jan 7.
8. Do Gradient Inversion Attacks Make Federated Learning Unsafe? IEEE Trans Med Imaging. 2023 Jul;42(7):2044-2056. doi: 10.1109/TMI.2023.3239391. Epub 2023 Jun 30.
9. WDA: An Improved Wasserstein Distance-Based Transfer Learning Fault Diagnosis Method. Sensors (Basel). 2021 Jun 26;21(13):4394. doi: 10.3390/s21134394.
10. Understanding Deep Gradient Leakage via Inversion Influence Functions. Adv Neural Inf Process Syst. 2023 Dec;36:3921-3944.

Cited By

1. A robust and personalized privacy-preserving approach for adaptive clustered federated distillation. Sci Rep. 2025 Apr 23;15(1):14069. doi: 10.1038/s41598-025-96468-8.

References

1. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries. Int J Radiat Oncol Biol Phys. 2017 Oct 1;99(2):344-352. doi: 10.1016/j.ijrobp.2017.04.021. Epub 2017 Apr 24.
2. Distributed learning: Developing a predictive model based on data from multiple hospitals without data leaving the hospital - A real life proof of concept. Radiother Oncol. 2016 Dec;121(3):459-467. doi: 10.1016/j.radonc.2016.10.002. Epub 2016 Oct 28.