

Multi-Task Federated Split Learning Across Multi-Modal Data with Privacy Preservation.

Authors

Dong Yipeng, Luo Wei, Wang Xiangyang, Zhang Lei, Xu Lin, Zhou Zehao, Wang Lulu

Affiliations

State Key Laboratory of Intelligent Vehicle Safety Technology, Chongqing 401133, China.

Shanghai Key Laboratory of Trustworthy Computing, Software Engineering Institute, East China Normal University, Shanghai 200062, China.

Publication

Sensors (Basel). 2025 Jan 3;25(1):233. doi: 10.3390/s25010233.

Abstract

With the advancement of federated learning (FL), there is a growing demand for schemes that support multi-task learning on multi-modal data while ensuring robust privacy protection, especially in applications like intelligent connected vehicles. Traditional FL schemes often struggle with the complexities introduced by multi-modal data and diverse task requirements, such as increased communication overhead and computational burdens. In this paper, we propose a novel privacy-preserving scheme for multi-task federated split learning across multi-modal data (MTFSLaMM). Our approach leverages the principles of split learning to partition models between clients and servers, employing a modular design that reduces computational demands on resource-constrained clients. To ensure data privacy, we integrate differential privacy to protect intermediate data and employ homomorphic encryption to safeguard client models. Additionally, our scheme employs an optimized attention mechanism guided by mutual information to achieve efficient multi-modal data fusion, maximizing information integration while minimizing computational overhead and preventing overfitting. Experimental results demonstrate the effectiveness of the proposed scheme in addressing the challenges of multi-modal data and multi-task learning while offering robust privacy protection, with MTFSLaMM achieving a 15.3% improvement in BLEU-4 and an 11.8% improvement in CIDEr scores compared with the baseline.
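The abstract describes protecting the intermediate ("smashed") activations that a split-learning client sends to the server by applying differential privacy. The sketch below illustrates one common way this is done, with Gaussian noise after norm clipping; the function names, layer shapes, and noise parameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def client_forward(x, w_client):
    """Client-side model part: a single linear layer with ReLU (illustrative)."""
    return np.maximum(x @ w_client, 0.0)

def dp_protect(smashed, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip each activation vector to clip_norm, then add Gaussian noise.

    Clipping bounds each sample's sensitivity; the noise scale sigma * clip_norm
    then follows the standard Gaussian mechanism recipe.
    """
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(smashed, axis=1, keepdims=True)
    clipped = smashed * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = rng.normal(0.0, sigma * clip_norm, size=clipped.shape)
    return clipped + noise

rng = np.random.default_rng(42)
x = rng.normal(size=(4, 8))        # a small batch of client inputs
w = rng.normal(size=(8, 16))       # client-side weights
smashed = client_forward(x, w)     # intermediate data at the split point
protected = dp_protect(smashed)    # what actually leaves the client
```

Only `protected` crosses the network, so the server never observes the raw activations; the privacy/utility trade-off is controlled by `clip_norm` and `sigma`.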


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4f38/11723476/c1bd0c5e8d17/sensors-25-00233-g001.jpg
