
A distribution information sharing federated learning approach for medical image data.

Author Information

Zhao Leiyang, Huang Jianjun

Affiliation

Guangdong Key Laboratory of Intelligent Information Processing, College of Electronics and Information Engineering, Shenzhen University, Shenzhen, China.

Publication Information

Complex Intell Systems. 2023 Mar 29:1-12. doi: 10.1007/s40747-023-01035-1.

Abstract

In recent years, federated learning has been believed to play a considerable role in cross-silo scenarios (e.g., medical institutions) due to its privacy-preserving properties. However, the non-IID problem in federated learning between medical institutions is common and degrades the performance of traditional federated learning algorithms. To overcome this performance degradation, a novel distribution information sharing federated learning approach (FedDIS) for medical image classification is proposed, which reduces non-IIDness across clients by generating data locally at each client from the medical image data distributions shared by the others, while protecting patient privacy. First, a variational autoencoder (VAE) is federally trained; its encoder is used to map the local original medical images into a latent space, and the distribution of the mapped data in that space is estimated and then shared among the clients. Second, each client generates a new set of image data from the received distribution information using the VAE decoder. Finally, the clients train the final classification model on the local dataset together with the augmented dataset in a federated learning manner. Experiments on an Alzheimer's disease diagnosis task with an MRI dataset and on the MNIST classification task show that the proposed method can significantly improve the performance of federated learning under non-IID cases.
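A minimal sketch of the pipeline described in the abstract, assuming a PyTorch setting. The VAE architecture, the latent dimensionality, and the use of a Gaussian summary (per-client mean and covariance in the latent space) as the shared "distribution information" are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the FedDIS steps outlined in the abstract (not the authors' code).
# Assumptions: a federally pre-trained VAE and a Gaussian (mean + covariance)
# latent-space summary as the shared distribution information.
import torch
import torch.nn as nn


class VAE(nn.Module):
    """Toy VAE; layer sizes and latent dimension are illustrative only."""

    def __init__(self, in_dim=784, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, z_dim)
        self.fc_logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def encode(self, x):
        h = self.enc(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def decode(self, z):
        return self.dec(z)


def latent_statistics(vae, images):
    """Step 1: map local images into the latent space and summarize the mapped
    data by its mean and covariance; only this summary is shared, raw images
    never leave the client."""
    with torch.no_grad():
        mu, _ = vae.encode(images)          # (N, z_dim)
    mean = mu.mean(dim=0)                   # (z_dim,)
    cov = torch.cov(mu.T)                   # (z_dim, z_dim)
    return mean, cov


def augment_from_statistics(vae, mean, cov, n_samples):
    """Step 2: on the receiving client, sample latent codes from the shared
    distribution and decode them into synthetic images."""
    cov = cov + 1e-4 * torch.eye(cov.shape[0])   # jitter for numerical stability
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    z = dist.sample((n_samples,))
    with torch.no_grad():
        return vae.decode(z)                     # (n_samples, in_dim)


# Example flow between two clients (flattened 28x28 images, as in MNIST):
# vae = VAE()                                    # trained federally beforehand
# mean_a, cov_a = latent_statistics(vae, client_a_images)
# synthetic = augment_from_statistics(vae, mean_a, cov_a, n_samples=1000)
# Step 3: client B trains its classifier on client_b_images + synthetic
# inside a standard federated averaging loop.
```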

