Zhang Chen, Hu Xiongwei, Xie Yu, Gong Maoguo, Yu Bin
School of Computer Science and Technology, Xidian University, Xi'an, China.
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Electronic Engineering, Xidian University, Xi'an, China.
Front Neurorobot. 2020 Jan 14;13:112. doi: 10.3389/fnbot.2019.00112. eCollection 2019.
Recently, multi-task learning (MTL) has been extensively studied for various face processing tasks, including face detection, landmark localization, pose estimation, and gender recognition. This approach aims to train a better model by exploiting the synergy among related tasks. However, the raw face dataset used for training often contains sensitive and private information, which can be maliciously recovered by carefully analyzing the model and its outputs. To address this problem, we propose a novel privacy-preserving multi-task learning approach that uses the differentially private stochastic gradient descent (DP-SGD) algorithm to optimize the end-to-end multi-task model and weights the loss functions of the individual tasks to improve learning efficiency and prediction accuracy. Specifically, calibrated noise is added to the gradients of the loss functions to preserve the privacy of the training data during model training. Furthermore, we exploit homoscedastic uncertainty to balance the different learning tasks. The experiments demonstrate that the proposed approach yields differential privacy guarantees without decreasing the accuracy of HyperFace under a desirable privacy budget.
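The gradient-perturbation step described above follows the standard DP-SGD recipe: clip each per-example gradient to a fixed norm, average, and add Gaussian noise calibrated to that norm. The following is a minimal NumPy sketch of one such update; the function name `dp_sgd_step` and the specific hyperparameter values are illustrative, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update (illustrative sketch):
    1) clip each per-example gradient to L2 norm <= clip_norm,
    2) sum and add Gaussian noise with std = noise_multiplier * clip_norm,
    3) average over the batch and take a gradient-descent step."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    batch_size = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch_size
    return params - lr * noisy_mean
```

With `noise_multiplier=0` the step reduces to ordinary clipped SGD, which makes the clipping behavior easy to check in isolation; the privacy guarantee comes from the noise term and is accounted for across iterations by a privacy accountant (not shown here).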
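The homoscedastic-uncertainty balancing mentioned in the abstract is commonly implemented (following Kendall et al.'s formulation) by scaling each task loss by a learned per-task precision and adding a log-variance regularizer: L_total = Σ_i (1/(2σ_i²)) L_i + log σ_i. A minimal sketch, assuming this standard formulation with log σ_i as the trainable parameters (the paper may use a variant):

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_sigmas):
    """Combine per-task losses using homoscedastic uncertainty:
    each loss L_i is weighted by 1/(2*sigma_i^2) = 0.5*exp(-2*log_sigma_i),
    and log(sigma_i) is added to penalize trivially large variances."""
    total = 0.0
    for loss, log_s in zip(task_losses, log_sigmas):
        total += 0.5 * np.exp(-2.0 * log_s) * loss + log_s
    return total
```

In training, the `log_sigmas` are optimized jointly with the network weights, so tasks with high observation noise are automatically down-weighted.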