IEEE Trans Pattern Anal Mach Intell. 2023 Apr;45(4):4521-4536. doi: 10.1109/TPAMI.2022.3195956. Epub 2023 Mar 7.
Federated learning models are collaboratively developed upon valuable training data owned by multiple parties. During the development and deployment of federated models, they are exposed to risks including illegal copying, re-distribution, misuse, and/or free-riding. To address these risks, ownership verification of federated learning models is a prerequisite for protecting federated learning model intellectual property rights (IPR), i.e., FedIPR. We propose a novel federated deep neural network (FedDNN) ownership verification scheme that allows private watermarks to be embedded and verified to claim legitimate IPR of FedDNN models. In the proposed scheme, each client independently verifies the existence of the model watermarks and claims respective ownership of the federated model without disclosing either private training data or private watermark information. The effectiveness of embedded watermarks is theoretically justified by rigorous analysis of the conditions under which watermarks can be privately embedded and detected by multiple clients. Moreover, extensive experimental results on computer vision and natural language processing tasks demonstrate that watermarks of varying bit lengths can be embedded and reliably detected without compromising original model performance. Our watermarking scheme is also resilient to various federated training settings and robust against watermark removal attacks.
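To make the embed-and-verify idea concrete, below is a minimal sketch of one common feature-based watermarking approach: a client holds a secret projection matrix and a target bit string, nudges a parameter vector so that the signs of its secret projections encode the bits, and later verifies ownership by checking the bit-match rate. This is an illustrative simplification, not the paper's actual FedIPR construction; the function names, the hinge-style sign loss, and all dimensions are assumptions for the example.

```python
import numpy as np

def embed_watermark(params, secret_matrix, bits, lr=0.1, steps=200):
    """Nudge `params` so that sign(secret_matrix @ params) encodes `bits`.

    Illustrative sketch: minimizes a hinge-style sign loss
    sum(max(0, margin - t_i * proj_i)) with targets t_i in {-1, +1},
    standing in for the watermark regularizer added to training.
    """
    targets = 2 * bits - 1  # map {0, 1} -> {-1, +1}
    params = params.copy()
    for _ in range(steps):
        proj = secret_matrix @ params
        # Active hinge terms: projections not yet past the margin 0.5
        active = (targets * proj < 0.5).astype(float)
        grad = secret_matrix.T @ (-targets * active)
        params -= lr * grad / len(bits)
    return params

def detect_watermark(params, secret_matrix, bits):
    """Return the fraction of watermark bits recovered from `params`."""
    decoded = (secret_matrix @ params > 0).astype(int)
    return float(np.mean(decoded == bits))
```

A client who suspects misuse would recompute `detect_watermark` on the disputed model with its private `secret_matrix` and `bits`; a bit-match rate far above the ~50% expected by chance supports the ownership claim, and neither the training data nor the watermark key needs to be revealed to other clients.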