Boston Children's Hospital, Boston, MA, United States; Harvard Medical School, Boston, MA, United States.
Johns Hopkins University, Baltimore, MD, United States.
Artif Intell Med. 2024 Sep;155:102936. doi: 10.1016/j.artmed.2024.102936. Epub 2024 Jul 25.
Federated learning enables training models on distributed, privacy-sensitive medical imaging data. However, data heterogeneity across participating institutions reduces model performance and raises fairness concerns, especially for underrepresented datasets. To address these challenges, we propose leveraging the multi-head attention mechanism in Vision Transformers to align the representations of heterogeneous data across clients. By using the attention mechanism as the alignment objective, our approach aims to improve both the accuracy and fairness of federated learning models in medical imaging applications. We evaluate our method on the IQ-OTH/NCCD Lung Cancer dataset, simulating various levels of data heterogeneity using Latent Dirichlet Allocation (LDA). Our results show that the proposed approach achieves competitive performance compared to state-of-the-art federated learning methods across different heterogeneity levels and improves the performance of models for underrepresented clients, promoting fairness in the federated learning setting. These findings highlight the potential of leveraging the multi-head attention mechanism to address the challenges of data heterogeneity in medical federated learning.
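The Dirichlet-based heterogeneity simulation mentioned in the abstract is commonly implemented by drawing per-client label proportions from a Dirichlet(alpha) distribution, where a smaller alpha yields a more skewed (non-IID) split. The sketch below is an illustrative implementation of this partitioning scheme, not the paper's actual code; the function name and parameters are assumptions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew.

    Illustrative sketch (not the paper's code): for each class, per-client
    shares are drawn from Dirichlet(alpha); smaller alpha -> more non-IID.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        # Shuffle the indices belonging to this class
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw this class's per-client proportions from Dirichlet(alpha)
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # Convert cumulative proportions into split points over the indices
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

With a large alpha (e.g. 100) each client receives a near-uniform class mix, while alpha around 0.1 concentrates most of a class's samples on a few clients, mimicking institution-level distribution shift.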