Privacy Masking Stochastic Subgradient-Push Algorithm for Distributed Online Optimization.

Publication Info

IEEE Trans Cybern. 2021 Jun;51(6):3224-3237. doi: 10.1109/TCYB.2020.2973221. Epub 2021 May 18.

DOI: 10.1109/TCYB.2020.2973221
PMID: 32149669
Abstract

This article investigates distributed online optimization for a group of units communicating over time-varying unbalanced directed networks. The units aim to cooperatively minimize the sum of all locally known convex cost functions (the global cost function) while keeping their local cost functions well masked. To solve such optimization problems in a collaborative and distributed fashion, a differentially private distributed stochastic subgradient-push algorithm, called DP-DSSP, is proposed; it ensures that units interact only with their in-neighbors while collectively optimizing the global cost function. Unlike most existing distributed algorithms, which do not consider privacy, DP-DSSP uses a differential-privacy strategy to mask the private information of participating units, which is more practical in applications involving sensitive messages, such as military affairs or medical treatment. An important feature of DP-DSSP is that it handles distributed online optimization over time-varying unbalanced directed networks. Theoretical analysis shows that DP-DSSP preserves differential privacy while achieving sublinear regret, and it reveals a trade-off between the privacy level and the accuracy of DP-DSSP. Furthermore, DP-DSSP can handle arbitrarily large but uniformly bounded delays in the communication links. Finally, simulation experiments confirm the practicability of DP-DSSP and the findings of this article.

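The core mechanics described in the abstract — push-sum consensus over a column-stochastic (unbalanced) digraph, a local subgradient step, and Laplace noise masking the values each unit transmits — can be sketched as follows. This is a minimal static-cost sketch under assumed parameters (a fixed three-node network, quadratic local costs, a geometric noise-decay schedule), not the paper's algorithm, which further handles time-varying networks, online costs, and communication delays:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                          # number of units (nodes)
a = np.array([1.0, 2.0, 3.0])  # local costs f_i(x) = (x - a_i)^2; global optimum = mean(a)

# Column-stochastic (but not row-stochastic) mixing matrix for an
# unbalanced directed network: column j lists how node j splits its mass.
A = np.array([[0.5, 0.0, 0.4],
              [0.5, 0.3, 0.0],
              [0.0, 0.7, 0.6]])

x = np.zeros(n)  # push-sum numerator states
y = np.ones(n)   # push-sum weight states

for t in range(3000):
    # Differential-privacy masking: perturb outgoing values with Laplace
    # noise whose scale decays over time (an assumed schedule).
    noisy_x = x + rng.laplace(0.0, 0.05 * 0.95 ** t, size=n)
    w = A @ noisy_x          # push-sum mixing of (masked) numerators
    y = A @ y                # mixing of the weights tracks the imbalance
    z = w / y                # de-biased consensus estimate
    grad = 2.0 * (z - a)     # subgradient of each local cost at z_i
    x = w - grad / (t + 1)   # diminishing step size

print(z)  # each entry settles near the global minimizer mean(a) = 2.0
```

Dividing by the weight state `y` is what makes push-sum work on unbalanced digraphs: a column-stochastic matrix alone would bias the consensus toward well-connected nodes, and `y` accumulates exactly that bias so the ratio `w / y` cancels it. The abstract's privacy-accuracy trade-off shows up here directly: a larger Laplace scale masks the transmitted states better but leaves more residual error in `z`.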

Similar Articles

1
Privacy Masking Stochastic Subgradient-Push Algorithm for Distributed Online Optimization.
IEEE Trans Cybern. 2021 Jun;51(6):3224-3237. doi: 10.1109/TCYB.2020.2973221. Epub 2021 May 18.
2
Distributed Online Constrained Optimization With Feedback Delays.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):1708-1720. doi: 10.1109/TNNLS.2022.3184957. Epub 2024 Feb 5.
3
Push-Sum Distributed Online Optimization With Bandit Feedback.
IEEE Trans Cybern. 2022 Apr;52(4):2263-2273. doi: 10.1109/TCYB.2020.2999309. Epub 2022 Apr 5.
4
Privacy Preservation in Distributed Subgradient Optimization Algorithms.
IEEE Trans Cybern. 2018 Jul;48(7):2154-2165. doi: 10.1109/TCYB.2017.2728644. Epub 2017 Jul 31.
5
A(DP)²SGD: Asynchronous Decentralized Parallel Stochastic Gradient Descent With Differential Privacy.
IEEE Trans Pattern Anal Mach Intell. 2022 Nov;44(11):8036-8047. doi: 10.1109/TPAMI.2021.3107796. Epub 2022 Oct 4.
6
Differentially Private Distributed Online Learning.
IEEE Trans Knowl Data Eng. 2018 Aug;30(8):1440-1453. doi: 10.1109/TKDE.2018.2794384. Epub 2018 Jan 17.
7
Online Learning Algorithm for Distributed Convex Optimization With Time-Varying Coupled Constraints and Bandit Feedback.
IEEE Trans Cybern. 2022 Feb;52(2):1009-1020. doi: 10.1109/TCYB.2020.2990796. Epub 2022 Feb 16.
8
An Uplink Communication-Efficient Approach to Featurewise Distributed Sparse Optimization With Differential Privacy.
IEEE Trans Neural Netw Learn Syst. 2021 Oct;32(10):4529-4543. doi: 10.1109/TNNLS.2020.3020955. Epub 2021 Oct 5.
9
Distributed Online Learning Algorithm for Noncooperative Games Over Unbalanced Digraphs.
IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):15846-15856. doi: 10.1109/TNNLS.2023.3290049. Epub 2024 Oct 29.
10
Privacy-Preserving Distributed ADMM With Event-Triggered Communication.
IEEE Trans Neural Netw Learn Syst. 2024 Feb;35(2):2835-2847. doi: 10.1109/TNNLS.2022.3192346. Epub 2024 Feb 5.

Cited By

1
Exploiting high-quality reconstruction image encryption strategy by optimized orthogonal compressive sensing.
Sci Rep. 2024 Apr 16;14(1):8805. doi: 10.1038/s41598-024-59277-z.