Thorgeirsson Adam Thor, Gauterin Frank
Dr. Ing. h.c. F. Porsche AG, 71287 Weissach, Germany.
Institute of Vehicle System Technology, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany.
Entropy (Basel). 2020 Dec 30;23(1):41. doi: 10.3390/e23010041.
Probabilistic predictions with machine learning are important in many applications. Such predictions are commonly produced with Bayesian learning algorithms. However, Bayesian learning methods are computationally expensive compared with non-Bayesian methods. Furthermore, the data used to train these algorithms are often distributed over a large group of end devices. Federated learning can be applied in this setting in a communication-efficient and privacy-preserving manner, but it does not provide predictive uncertainty. To represent predictive uncertainty in federated learning, we propose introducing uncertainty in the aggregation step of the algorithm by treating the set of local weights as a posterior distribution over the weights of the global model. We compare our approach to state-of-the-art Bayesian and non-Bayesian probabilistic learning algorithms. By applying proper scoring rules to evaluate the predictive distributions, we show that our approach achieves performance similar to what the benchmarks achieve in a non-distributed setting.
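The core idea of the abstract, treating the collected client weights as samples from a posterior over the global model's weights, can be illustrated with a minimal sketch. The toy setup below (1-D linear regression, a Gaussian fit to the client weights, and Monte Carlo predictive sampling) is an assumption of this sketch, not the paper's actual architecture or posterior family:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = 2.0  # ground-truth weight of the toy data-generating process

def local_fit(n=50):
    """One client: least-squares fit of y = w * x on its private local data."""
    x = rng.normal(size=n)
    y = TRUE_W * x + rng.normal(scale=0.5, size=n)
    return float(np.dot(x, y) / np.dot(x, x))

# Aggregation step: instead of averaging the client weights into a single
# point estimate (as in plain federated averaging), treat the set of local
# weights as samples from a posterior over the global weight. Here we fit
# a Gaussian to them -- a simplifying assumption of this sketch.
client_ws = np.array([local_fit() for _ in range(10)])
post_mean, post_std = client_ws.mean(), client_ws.std(ddof=1)

def predict(x_new, n_samples=1000):
    """Predictive distribution: ensemble over weights drawn from the posterior."""
    w_samples = rng.normal(post_mean, post_std, size=n_samples)
    preds = w_samples * x_new
    return preds.mean(), preds.std()  # predictive mean and uncertainty

mu, sigma = predict(3.0)
```

Unlike a single aggregated model, `predict` returns a spread (`sigma`) alongside the point prediction, which is the predictive uncertainty that proper scoring rules such as the log score or CRPS can then evaluate.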