IEEE Trans Pattern Anal Mach Intell. 2022 Feb;44(2):1002-1019. doi: 10.1109/TPAMI.2020.3015859. Epub 2022 Jan 7.
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together, whereas in single-task learning (STL) each task is learned independently. MTL often yields better-trained models because they can leverage the commonalities among related tasks. However, because MTL algorithms can "leak" information across the models of different tasks, MTL poses a potential security risk: an adversary may participate in the MTL process through one task and thereby acquire the model information for another task. Previously proposed privacy-preserving MTL methods protect data instances rather than models, and some of them may underperform STL methods. In this paper, we propose a privacy-preserving MTL framework that prevents information in each model from leaking to other models, based on a perturbation of the covariance matrix of the model matrix. We instantiate the framework with two popular MTL approaches, namely learning the low-rank and the group-sparse patterns of the model matrix. Our algorithms are guaranteed not to underperform STL methods. We build our methods upon tools from differential privacy, provide privacy guarantees and utility bounds, and consider heterogeneous privacy budgets. Experiments demonstrate that, on the proposed model-protection problem, our algorithms outperform baselines constructed from existing privacy-preserving MTL methods.
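To make the central idea concrete, the following is a minimal sketch of perturbing the covariance of a model matrix with symmetric Gaussian noise in the spirit of the Gaussian mechanism from differential privacy. It is an illustrative approximation only, not the paper's exact algorithm; the function name, the column-norm clipping bound, and the noise calibration are all assumptions introduced here for illustration.

```python
import numpy as np

def perturbed_model_covariance(W, epsilon, delta, clip=1.0, rng=None):
    """Illustrative sketch (not the paper's algorithm): perturb the
    covariance of a model matrix W (d x m, one column per task) with
    symmetric Gaussian noise, in the spirit of the Gaussian mechanism.

    epsilon, delta: privacy budget.
    clip: assumed bound on each task's column norm (an assumption made
    here so that the covariance has bounded sensitivity).
    """
    rng = np.random.default_rng(rng)
    # Clip each task's model column so one task's influence is bounded.
    norms = np.linalg.norm(W, axis=0)
    W = W / np.maximum(1.0, norms / clip)
    cov = W @ W.T                      # d x d covariance of the model matrix
    # Gaussian-mechanism noise scale, assuming L2 sensitivity ~ clip**2.
    sigma = clip**2 * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=cov.shape)
    noise = (noise + noise.T) / 2.0    # symmetrize to keep a valid covariance perturbation
    return cov + noise
```

Downstream MTL steps (e.g., extracting a low-rank or group-sparse structure) would then operate on the perturbed covariance rather than on the raw per-task models.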