School of Software Technology, Dalian University of Technology, Dalian, Liaoning, China.
PLoS One. 2024 Apr 17;19(4):e0301897. doi: 10.1371/journal.pone.0301897. eCollection 2024.
With the continuous development of vehicular ad hoc network (VANET) security, using federated learning (FL) to deploy intrusion detection models in VANETs has attracted considerable attention. Compared to conventional centralized learning, FL keeps private training data local, thus protecting privacy. However, sensitive information about the training data can still be inferred from the model parameters shared in FL. Differential privacy (DP) is a sophisticated technique to mitigate such attacks. A key challenge of implementing DP in FL is that adding DP noise non-selectively can degrade model accuracy, while perturbing many parameters also increases the privacy budget consumption and communication costs of detection models. To address this challenge, we propose FFIDS, an FL algorithm that integrates model parameter pruning with differential privacy. It employs a parameter pruning technique based on the Fisher Information Matrix to reduce the privacy budget consumed per iteration while ensuring no loss of accuracy. Specifically, FFIDS evaluates parameter importance and prunes unimportant parameters to generate compact sub-models, while recording the positions of the parameters in each sub-model. This not only reduces model size, lowering communication costs, but also keeps accuracy stable. DP noise is then added to the sub-models. By not perturbing unimportant parameters, more of the budget can be reserved to retain important parameters over more iterations. Finally, the server can promptly recover the sub-models using the parameter position information and complete aggregation. Extensive experiments on two public datasets and two F2MD simulation datasets have validated the utility and superior performance of the FFIDS algorithm.
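The pipeline described above (score parameters by Fisher information, prune to a sub-model while recording positions, perturb only the surviving parameters, then recover the full shape on the server) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the diagonal Fisher estimate via squared gradients, the `keep_ratio`, `clip_norm`, and `sigma` parameters, and the function names are all assumptions chosen for clarity.

```python
import numpy as np

def fisher_prune_and_perturb(params, grads, keep_ratio=0.5,
                             clip_norm=1.0, sigma=1.0, rng=None):
    """Hypothetical sketch of FFIDS-style client-side processing:
    1) score each parameter with a diagonal Fisher-information estimate,
    2) keep only the most important parameters (recording their positions),
    3) clip and add Gaussian DP noise to the surviving sub-model only.
    """
    rng = np.random.default_rng(rng)
    # Diagonal Fisher approximation: squared gradients as importance scores.
    fisher = grads ** 2
    k = max(1, int(keep_ratio * params.size))
    # Positions of the k most important parameters; sent alongside the sub-model
    # so the server can restore the full parameter layout.
    positions = np.argsort(fisher)[-k:]
    sub_model = params[positions]
    # Clip the sub-model's norm, then perturb with noise scaled to the clip norm,
    # so the privacy budget is spent only on the retained parameters.
    norm = np.linalg.norm(sub_model)
    sub_model = sub_model * min(1.0, clip_norm / (norm + 1e-12))
    noisy = sub_model + rng.normal(0.0, sigma * clip_norm, size=k)
    return noisy, positions

def server_recover(noisy_sub, positions, model_size):
    """Server side: scatter a received sub-model back into a full-size
    vector (pruned entries default to zero) before aggregation."""
    full = np.zeros(model_size)
    full[positions] = noisy_sub
    return full
```

In a full FL round, each client would send `(noisy, positions)` instead of the dense update, and the server would call `server_recover` per client before averaging, which is how the position records enable prompt recovery and aggregation.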