Bhanbhro Jamsher, Nisticò Simona, Palopoli Luigi
DIMES, University of Calabria, 87036, Rende, Italy.
Sci Rep. 2024 Dec 2;14(1):29881. doi: 10.1038/s41598-024-81732-0.
The growing need for data privacy and security in machine learning has led to the exploration of novel approaches such as federated learning (FL), which allows collaborative training on distributed datasets and offers a decentralized alternative to traditional data collection methods. A prime benefit of FL is its emphasis on privacy: data stays on local devices because models, rather than data, are moved. Despite its pioneering nature, FL faces issues such as diversity in data types, model complexity, privacy concerns, and the need for efficient resource distribution. This paper presents an empirical analysis of these challenges within specially designed scenarios, each aimed at studying a specific problem. In particular, unlike existing literature, we isolate the issues that can arise in an FL framework so as to observe their nature without the interference of external factors.
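The "moving models instead of data" idea described above can be sketched with a minimal federated-averaging (FedAvg-style) loop. This is an illustrative toy on a linear-regression task, not the authors' actual experimental setup; all function names, the learning rate, and the client shards are assumptions made for the example.

```python
# Minimal sketch of federated averaging: each client trains a local copy of
# the model on its own data, and the server aggregates only the weights.
# The toy linear-regression task and all hyperparameters are illustrative
# assumptions, not the paper's actual experimental configuration.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local training: plain gradient descent on squared error."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(client_data, w0, rounds=30):
    """Server loop: broadcast weights, collect local updates, and average
    them weighted by each client's dataset size. Raw data never leaves
    a client; only model parameters travel."""
    w = w0
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in client_data]
        w = np.average(updates, axis=0, weights=sizes)
    return w

# Toy setup: three clients, each holding a private shard drawn from the
# same underlying linear model y = x @ [2, -1].
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = fedavg(clients, w0=np.zeros(2))
print(np.round(w, 2))  # recovers weights close to [2., -1.]
```

In a real FL deployment the averaging step would run on a coordinating server over a network, and only parameter updates (possibly compressed or privatized) would be transmitted; the simulation above keeps everything in one process purely to show the data flow.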