Explainable federated learning scheme for secure healthcare data sharing.

Author Information

Zhao Liutao, Xie Haoran, Zhong Lin, Wang Yujue

Affiliations

Beijing Academy of Science and Technology, Beijing Computing Center Company Ltd., Beijing, China.

School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen, China.

Publication Information

Health Inf Sci Syst. 2024 Sep 13;12(1):49. doi: 10.1007/s13755-024-00306-6. eCollection 2024 Dec.

Abstract

Artificial intelligence has immense potential for applications in smart healthcare. A large amount of medical data collected by wearable or implantable devices has accumulated in Body Area Networks, and unlocking its value enables broader application of artificial intelligence in the smart healthcare field. To utilize this dispersed data, this paper proposes an innovative Federated Learning scheme that targets the challenges of explainability and security in smart healthcare. In the proposed scheme, the federated modeling process and the explainability analysis are independent of each other: post-hoc explanation techniques are applied to the global model, so the model's mechanism can be understood without the performance degradation that comes from building explainability into training. In terms of security, first, a fair and efficient client private gradient evaluation method is introduced for explainable assessment of gradient contributions, quantifying each client's contribution to federated learning and filtering out the impact of low-quality data. Second, to address the privacy of medical health data collected by wireless Body Area Networks, a multi-server model is proposed to solve the secure aggregation problem in federated learning. Furthermore, by employing homomorphic secret sharing and homomorphic hashing techniques, a non-interactive, verifiable secure aggregation protocol is proposed, ensuring that client data privacy is protected and the correctness of the aggregation results is maintained even when up to a threshold number of malicious servers collude. Experimental results demonstrate that the proposed scheme's explainability is consistent with that of centralized training, and that it is competitive in terms of security and efficiency.
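To illustrate the separation between federated modeling and explainability analysis described above, the sketch below applies a generic post-hoc technique (permutation feature importance) to an already trained global model on a held-out evaluation set. The explanation method, the model interface, and all names are illustrative assumptions for exposition, not details taken from the paper.

```python
# Minimal sketch: post-hoc explainability on a trained global model,
# independent of the federated training loop. Permutation feature
# importance stands in for whichever post-hoc method is actually used.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Importance of feature j = drop in the metric when column j is
    shuffled, averaged over n_repeats random permutations."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature/label link
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Usage on a hypothetical global model and held-out evaluation set
# (global_model, X_eval, y_eval, feature_names are assumed to exist and
# follow a scikit-learn style interface):
# from sklearn.metrics import accuracy_score
# scores = permutation_importance(global_model, X_eval, y_eval, accuracy_score)
# for name, s in sorted(zip(feature_names, scores), key=lambda t: -t[1]):
#     print(f"{name}: {s:.4f}")
```

Because the explanation is computed only from the aggregated global model and an evaluation set, it does not interfere with federated training and can be reproduced in a centralized setting for comparison.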

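To make the aggregation step concrete, the sketch below shows additive (homomorphic) secret sharing of client gradients across multiple servers, with a toy discrete-log commitment standing in for the homomorphic hash that verifies the result. The function names, field modulus, and commitment are illustrative assumptions; this is not the authors' protocol, which additionally achieves non-interactivity and tolerates a threshold of colluding malicious servers.

```python
# Minimal sketch of multi-server secure aggregation via additive secret
# sharing with a toy homomorphic commitment. A real scheme would use a
# proper homomorphic hash and reduce exponents modulo the group order.
import secrets

PRIME = 2**61 - 1          # field modulus for additive shares (toy choice)
GENERATOR = 3              # base of the toy commitment H(x) = g^x mod p

def share_gradient(value: int, num_servers: int) -> list[int]:
    """Split one quantized gradient value into additive shares, one per
    server; any proper subset of shares is uniformly random."""
    shares = [secrets.randbelow(PRIME) for _ in range(num_servers - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def commit(value: int) -> int:
    """Toy homomorphic hash: H(x) * H(y) = H(x + y) for small inputs."""
    return pow(GENERATOR, value, PRIME)

def aggregate_shares(per_server_shares: list[list[int]]) -> int:
    """Each server sums the shares it received; adding the per-server
    partial sums recovers the sum of gradients without revealing any
    individual client's gradient."""
    partial_sums = [sum(shares) % PRIME for shares in per_server_shares]
    return sum(partial_sums) % PRIME

# Usage: three clients, two non-colluding servers
client_gradients = [12, 7, 30]   # small quantized gradient values
num_servers = 2
server_inbox = [[] for _ in range(num_servers)]
commitments = []
for g in client_gradients:
    for server_id, share in enumerate(share_gradient(g, num_servers)):
        server_inbox[server_id].append(share)
    commitments.append(commit(g))

aggregate = aggregate_shares(server_inbox)

# Verification: the product of client commitments must equal the
# commitment of the reported aggregate.
expected = 1
for c in commitments:
    expected = (expected * c) % PRIME
assert expected == commit(aggregate), "aggregate failed verification"
assert aggregate == sum(client_gradients)
print("verified aggregate:", aggregate)
```

In this toy version, privacy rests on additive secret sharing (each server alone sees only random shares), while the commitment check lets the receiver reject an aggregate that does not match the clients' committed inputs.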
