Lim Hyun Woo, Tanjung Sean Yonathan, Iwan Ignatius, Yahya Bernardo Nugroho, Lee Seok-Lyong
Department of Industrial and Management Engineering, Hankuk University of Foreign Studies, Yongin 17035, Republic of Korea.
Sensors (Basel). 2025 Jun 12;25(12):3687. doi: 10.3390/s25123687.
Federated learning (FL) is a decentralized approach that builds a global model by aggregating updates from diverse clients without sharing their local data. The approach becomes vulnerable, however, when Byzantine clients that submit arbitrarily manipulated updates, referred to as malicious clients, join the training. Classical techniques such as Federated Averaging (FedAvg) neither incentivize reliable clients nor discourage malicious ones, and other existing Byzantine FL schemes that address malicious clients either rely on incentivizing reliable clients or require server-labeled data as a public validation dataset, which increases time complexity. This study introduces FedEach, a federated learning framework with an evaluator-based incentive mechanism that offers robustness without any dependence on server-labeled data. In this framework, clients are divided into evaluators and participants. Unlike existing approaches, the server selects the evaluators and participants from among the clients using model-based performance criteria such as test score and reputation. The evaluators then assess whether each participant is reliable or malicious, and the server aggregates only the models of the evaluators and the participants identified as reliable to update the global model. After this aggregation, the server computes each client's contribution, rewarding high-quality updates and penalizing malicious clients based on their contributions. Empirical results on human activity recognition (HAR) datasets demonstrate FedEach's effectiveness, especially in environments with a high proportion of malicious clients. In addition, FedEach remains computationally efficient, making it suitable for FL applications such as sensor-based HAR with wearable devices and mobile sensing.
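To make the round structure described above concrete, the following is a minimal NumPy sketch of one evaluator-based aggregation round, not the authors' implementation: the selection rule, reliability threshold, and reputation update (`select by score + reputation`, `is reliable if score above threshold`, fixed penalty) are illustrative assumptions, and the synthetic linear models stand in for real HAR models.

```python
# Minimal sketch (not the paper's code) of one FedEach-style round:
# evaluator/participant selection, reliability filtering, aggregation,
# and contribution-based reputation update. All formulas are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, n_evaluators = 5, 8, 3
true_w = rng.normal(size=dim)                 # synthetic ground-truth model
reputation = np.ones(n_clients)               # running reputation per client

def local_update(client_id, malicious=False):
    """Return a client's model update; malicious clients send arbitrary noise."""
    if malicious:
        return rng.normal(scale=10.0, size=dim)
    return true_w + rng.normal(scale=0.1, size=dim)

def test_score(update, X, y):
    """Accuracy-like score: negative MSE of the update on held-out data."""
    return -np.mean((X @ update - y) ** 2)

# Evaluation data held by clients acting as evaluators (no server-labeled data).
X_eval = rng.normal(size=(50, dim))
y_eval = X_eval @ true_w + rng.normal(scale=0.05, size=50)

updates = np.stack([local_update(i, malicious=(i >= 6)) for i in range(n_clients)])
scores = np.array([test_score(u, X_eval, y_eval) for u in updates])

# 1) Server selects evaluators using model-based criteria (test score + reputation).
rank = np.argsort(scores + reputation)[::-1]
evaluators, participants = rank[:n_evaluators], rank[n_evaluators:]

# 2) Evaluators flag a participant as reliable if its score is close to theirs.
threshold = scores[evaluators].mean() - 2 * scores[evaluators].std()
reliable = [p for p in participants if scores[p] >= threshold]

# 3) Server aggregates only evaluators and reliable participants.
kept = np.concatenate([evaluators, reliable]).astype(int)
global_model = updates[kept].mean(axis=0)

# 4) Contribution-based incentive: reward kept clients, penalize excluded ones.
reputation[kept] += scores[kept] - scores[kept].min()
excluded = np.setdiff1d(np.arange(n_clients), kept)
reputation[excluded] -= 1.0

print("kept clients:", kept)
print("global model error:", np.linalg.norm(global_model - true_w))
```

Running the sketch, the two noise-injecting clients fall below the evaluators' score threshold and are excluded from aggregation, so the averaged global model stays close to the ground-truth weights while their reputations are penalized for the next round.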