Vanderbilt University, Nashville, TN.
Vanderbilt University Medical Center, Nashville, TN.
AMIA Annu Symp Proc. 2024 Jan 11;2023:1047-1056. eCollection 2023.
Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks.
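The split-learning protocol the abstract describes can be sketched numerically: the client keeps its raw records and the layers before the "cut", the server keeps the layers after it, and only the cut-layer activations (forward) and their gradients (backward) cross the boundary. The sketch below is a minimal, hypothetical illustration with a tiny NumPy model and synthetic data; it is not the paper's framework or datasets, and the split point, layer sizes, and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data held privately by the client (synthetic, for illustration only).
X = rng.normal(size=(32, 8))
true_w = rng.normal(size=(8, 1))
y = X @ true_w

# Split the model at a cut layer: the client owns W1, the server owns W2.
W1 = rng.normal(size=(8, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1
lr = 0.05

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

losses = []
for step in range(200):
    # --- Client: forward pass up to the cut layer ---
    Z = X @ W1
    A = np.maximum(Z, 0.0)      # cut-layer activations sent to the server

    # --- Server: finish the forward pass and compute the loss ---
    pred = A @ W2
    losses.append(mse(pred, y))

    # Server backprop: gradients for its own weights and for A.
    dpred = 2.0 * (pred - y) / len(y)
    dW2 = A.T @ dpred
    dA = dpred @ W2.T           # only this gradient is returned to the client

    # --- Client: finish backprop using the returned gradient ---
    dZ = dA * (Z > 0.0)
    dW1 = X.T @ dZ

    # Each party updates only its own parameters.
    W1 -= lr * dW1
    W2 -= lr * dW2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that neither the raw records `X` nor the client's weights `W1` ever leave the client, and the server's weights `W2` never leave the server; that locality is what distinguishes this setup from federated learning, where full model parameters are exchanged.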