School of Computer and Information Engineering, Henan University, Kaifeng 475001, China.
School of Mathematics and Computer Science, Nanchang University, Nanchang 330031, China.
Sensors (Basel). 2022 Nov 3;22(21):8475. doi: 10.3390/s22218475.
With the development of the Internet of Things (IoT), federated learning (FL) has received increasing attention as a distributed machine learning (ML) framework that does not require data exchange. However, current FL frameworks assume an idealized setup in which the task size is fixed and storage space is unlimited, which is unrealistic. In practice, new classes continually emerge on the participating clients over time, and some samples are overwritten or discarded because of storage limitations. A new framework is urgently needed that can adapt to the dynamic task sequences and strict storage constraints of the real world. Continual learning, or incremental learning, is an ultimate goal of deep learning, and we introduce incremental learning into FL to define a new federated learning framework. New generation federated learning (NGFL) is arguably the most desirable framework for FL: in addition to the basic task of training with the server, each client must learn its own private tasks, which arrive continuously and independently of communication with the server. We give a rigorous mathematical formulation of this framework, detail the major challenges it raises, address the two main challenges of combining incremental learning with federated learning (the aggregation of heterogeneous output layers and the mutual-knowledge problem under task transformation), and establish lower and upper baselines for the framework.
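For concreteness, one way such a formulation might look is a federated objective whose per-client training data is limited by a storage budget. This is only a minimal sketch; the notation below (the global weights w, the task sequence T_k^{(t)}, the retained set D_k^{(r)}, and the budget B_k) is our own assumption and not necessarily the paper's:

```latex
\min_{w}\;\sum_{k=1}^{K}\frac{n_k^{(r)}}{\sum_{j=1}^{K} n_j^{(r)}}\,
\mathcal{L}_k\!\bigl(w;\,\mathcal{D}_k^{(r)}\bigr),
\qquad
\mathcal{D}_k^{(r)}\subseteq\bigcup_{t\le r} T_k^{(t)},
\quad
\bigl|\mathcal{D}_k^{(r)}\bigr|\le B_k
```

Here T_k^{(t)} is the private task (possibly containing new classes) that arrives at client k at time t, n_k^{(r)} = |D_k^{(r)}| is the number of samples client k actually retains at round r, and the budget B_k is what forces old samples to be overwritten or discarded, so the objective itself drifts as tasks arrive.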
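The heterogeneous-output-layer challenge arises because clients that have seen different class sets carry output layers of different shapes, which plain FedAvg cannot average directly. The NumPy sketch below illustrates one plausible strategy under that assumption: map each local output row to a global class index and take a sample-weighted average per class. The function name and all parameters are hypothetical, and this is not necessarily the paper's method:

```python
import numpy as np

def aggregate_heterogeneous_heads(client_heads, client_classes,
                                  client_sizes, n_global_classes):
    """Sample-weighted, per-class averaging of output layers whose
    rows cover different class subsets (hypothetical sketch).

    client_heads     : list of (n_local_classes, d) weight matrices
    client_classes   : list of lists mapping each local row to a global class id
    client_sizes     : list of per-client sample counts used as weights
    n_global_classes : number of classes seen so far across all clients
    """
    d = client_heads[0].shape[1]
    agg = np.zeros((n_global_classes, d))
    weight = np.zeros(n_global_classes)  # total weight accumulated per class
    for head, classes, n_k in zip(client_heads, client_classes, client_sizes):
        for row, c in enumerate(classes):
            agg[c] += n_k * head[row]    # accumulate this client's row for class c
            weight[c] += n_k
    seen = weight > 0
    agg[seen] /= weight[seen, None]      # average only classes some client holds
    return agg

# Two clients whose heads cover overlapping class subsets.
h1 = np.ones((2, 4))       # client 1 holds global classes 0 and 2
h2 = 3 * np.ones((2, 4))   # client 2 holds global classes 2 and 3
global_head = aggregate_heterogeneous_heads(
    [h1, h2], [[0, 2], [2, 3]], [10, 10], n_global_classes=4)
# Row 2 is the weighted mean of both clients; row 1 (unseen) stays zero.
```

Rows for classes no client currently holds are left at zero here; a real system would need a policy for them, which is exactly where the storage-constrained task sequence makes the aggregation nontrivial.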