IEEE Trans Med Imaging. 2023 Oct;42(10):2948-2960. doi: 10.1109/TMI.2023.3270140. Epub 2023 Oct 2.
Federated learning is an emerging paradigm that enables large-scale decentralized learning without sharing data across data owners, which helps address the concern of data privacy in medical image analysis. However, existing methods require label consistency across clients, which largely narrows their application scope. In practice, each clinical site may annotate only certain organs of interest, with partial or no overlap with other sites. Incorporating such partially labeled data into a unified federation is an unexplored problem of clinical significance and urgency. This work tackles the challenge by proposing a novel federated multi-encoding U-Net (Fed-MENU) method for multi-organ segmentation. In our method, a multi-encoding U-Net (MENU-Net) is proposed to extract organ-specific features through different encoding sub-networks. Each sub-network can be seen as an expert on a specific organ, trained by the corresponding client. Moreover, to encourage the organ-specific features extracted by different sub-networks to be informative and distinctive, we regularize the training of the MENU-Net with an auxiliary generic decoder (AGD). Extensive experiments on six public abdominal CT datasets show that our Fed-MENU method can effectively obtain a federated learning model from the partially labeled datasets, with superior performance to models trained by either localized or centralized learning. Source code is publicly available at https://github.com/DIAL-RPI/Fed-MENU.
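For readers unfamiliar with federated learning, the standard FedAvg server-side aggregation step that methods like Fed-MENU build on can be sketched as follows. This is a toy illustration in plain Python with flat parameter lists; the variable names and simplifications are hypothetical, not the authors' implementation (which is available in their public repository).

```python
# Toy sketch of FedAvg-style server aggregation: the server combines
# locally trained parameters from each client by a dataset-size-weighted
# average, so no raw data ever leaves a clinical site.

def fedavg(client_params, client_sizes):
    """Weighted average of clients' flat parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(n * p[i] for n, p in zip(client_sizes, client_params)) / total
        for i in range(dim)
    ]

# Two clinical sites with different local dataset sizes.
site_a = [1.0, 2.0]   # parameters after local training at site A (100 scans)
site_b = [3.0, 4.0]   # parameters after local training at site B (300 scans)

global_params = fedavg([site_a, site_b], [100, 300])
print(global_params)  # [2.5, 3.5]
```

In a partial-labeling setting such as Fed-MENU's, the aggregated model additionally contains organ-specific encoding sub-networks, each supervised only by the clients that annotate that organ.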