Jayaweera Rasanga, Agrawal Himanshu, Karie Nickson M
School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Bentley, WA 6102, Australia.
Sensors (Basel). 2025 Aug 17;25(16):5108. doi: 10.3390/s25165108.
Digital transformation in healthcare has introduced data privacy challenges, as hospitals struggle to protect patient information while adopting digital technologies such as AI, IoT, and cloud computing more rapidly than ever before. The adoption of powerful third-party Machine Learning as a Service (MLaaS) solutions for disease prediction has become common practice. However, these solutions pose significant privacy risks when sensitive healthcare data are shared with external third-party servers, raising compliance concerns under regulations such as HIPAA, GDPR, and Australia's Privacy Act. To address these challenges, this paper explores a decentralized, privacy-preserving approach to training models across multiple healthcare stakeholders, integrating Federated Learning (FL) with Homomorphic Encryption (HE) so that model parameters remain protected throughout the learning process. This paper proposes a novel Homomorphic Encryption-based Adaptive Tuning for Federated Learning (HEAT-FL) framework that selects encryption parameters based on model layer sensitivity. The proposed framework leverages the CKKS scheme to encrypt model parameters on the client side before sharing. This enables secure aggregation at the central server without requiring decryption, providing an additional layer of security through layer-wise parameter management. The proposed adaptive encryption approach significantly improves runtime efficiency while maintaining a balanced level of security. Compared to existing non-adaptive frameworks using 256-bit security settings, the proposed framework reduces encryption time per epoch by 56.5% for 10 clients and 54.6% for four clients.
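The core dataflow the abstract describes — clients encrypt their model parameters, and the server aggregates ciphertexts without ever decrypting — can be sketched in a few lines. The paper uses CKKS (which supports approximate real-number arithmetic); since a CKKS library is not assumed available here, this sketch substitutes textbook Paillier with tiny, insecure demo primes, which is also additively homomorphic and illustrates the same aggregate-without-decrypting round. The weight values and scaling factor are illustrative assumptions, not the authors' configuration.

```python
import random
from math import gcd

# Textbook Paillier with tiny, INSECURE primes -- a stand-in for CKKS,
# used only to show the aggregate-without-decrypting dataflow.
p, q = 293, 433                 # demo primes; real keys are 1024+ bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# One federated-averaging round: clients encrypt fixed-point-encoded
# weights; the server multiplies ciphertexts (= adds plaintexts) with no key.
SCALE = 1000                                    # fixed-point scaling factor
client_weights = [0.12, 0.34, 0.56, 0.78]       # hypothetical per-client values
ciphertexts = [encrypt(round(w * SCALE)) for w in client_weights]

agg = 1
for c in ciphertexts:
    agg = (agg * c) % n2                        # homomorphic addition

# Key holders decrypt only the aggregate, then rescale to the mean
avg = decrypt(agg) / (SCALE * len(client_weights))
print(avg)  # 0.45
```

Note that the server never sees an individual client's plaintext update, only the product of ciphertexts; individual contributions are exposed to no party except through the decrypted aggregate.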
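The abstract's "adaptive tuning" idea — choosing encryption parameters per model layer according to sensitivity — can be illustrated in general shape. The actual HEAT-FL selection rule is not given in the abstract, so the sensitivity thresholds, the preset parameter sets, and the `select_params` helper below are all hypothetical; the CKKS-style parameter names mirror common library conventions (polynomial modulus degree, coefficient modulus bit sizes) but are illustrative values only.

```python
# Hypothetical CKKS-style parameter presets keyed by security level in bits.
# Higher security -> larger parameters -> slower encryption, which is why
# reserving the strongest settings for sensitive layers saves runtime.
PRESETS = {
    128: {"poly_modulus_degree": 8192,  "coeff_mod_bit_sizes": [60, 40, 40, 60]},
    192: {"poly_modulus_degree": 16384, "coeff_mod_bit_sizes": [60, 40, 40, 40, 60]},
    256: {"poly_modulus_degree": 32768, "coeff_mod_bit_sizes": [60, 50, 50, 50, 50, 60]},
}

def select_params(layer_name: str, sensitivity: float) -> dict:
    """Map a per-layer sensitivity score in [0, 1] to an encryption preset.

    The thresholds are illustrative assumptions, not the paper's rule.
    """
    if sensitivity >= 0.8:
        bits = 256
    elif sensitivity >= 0.5:
        bits = 192
    else:
        bits = 128
    return {"layer": layer_name, "security_bits": bits, **PRESETS[bits]}

# Hypothetical model: later layers assumed to carry more identifiable signal
plan = [select_params("conv1", 0.3),
        select_params("fc1", 0.6),
        select_params("output", 0.9)]
for entry in plan:
    print(entry["layer"], entry["security_bits"])
```

Under this kind of scheme, only the layers scored as most sensitive pay the cost of 256-bit parameters, which is consistent with the reported reduction in per-epoch encryption time relative to encrypting every layer at 256-bit security.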