

An Information-Theoretic Analysis of the Cost of Decentralization for Learning and Inference under Privacy Constraints.

Authors

Sharu Theresa Jose, Osvaldo Simeone

Affiliation

Department of Engineering, King's College London, London WC2R 2LS, UK.

Publication

Entropy (Basel). 2022 Mar 30;24(4):485. doi: 10.3390/e24040485.

Abstract

In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be quantified in terms of conditional mutual information terms involving features and label variables.
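As an illustrative sketch of how conditional mutual information terms of this kind can arise (this is a standard log-loss identity, not necessarily the paper's exact statement), consider two agents in a vertical FL setup: agent 1 observes feature X_1, agent 2 observes feature X_2, and the goal is to predict the label Y. Under log-loss, the minimum Bayes risk of a centralized predictor with access to both features is the conditional entropy H(Y | X_1, X_2), whereas a predictor restricted to agent 1's feature can do no better than H(Y | X_1). The gap between the two risks is

H(Y \mid X_1) - H(Y \mid X_1, X_2) = I(Y; X_2 \mid X_1),

a conditional mutual information term involving a feature and the label, which captures the performance lost when agent 2's feature is unavailable at inference time.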


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9495/9030603/1853896fa354/entropy-24-00485-g001.jpg
