Su Dan, Jin Long, Wang Jun
School of Information Science and Engineering, Lanzhou University, Lanzhou, China; School of Automation, Central South University, Changsha, China.
School of Information Science and Engineering, Lanzhou University, Lanzhou, China; Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong.
Neural Netw. 2025 Jan;181:106829. doi: 10.1016/j.neunet.2024.106829. Epub 2024 Oct 24.
Sharpness-aware minimization (SAM) aims to enhance model generalization by minimizing the sharpness of the loss landscape, leading to robust model performance. To protect sensitive information and enhance privacy, prevailing approaches add noise to models. However, additive noise inevitably degrades the generalization and robustness of the model. In this paper, we propose a noise-resistant SAM method based on a noise-resistant parameter update rule. We analyze the convergence and noise-resistance properties of the proposed method under noisy conditions. We present experimental results with several networks on various benchmark datasets to demonstrate the advantages of the proposed method with respect to model generalization and privacy protection.
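For context, the standard SAM update the abstract builds on can be sketched as follows. This is a minimal illustration of vanilla SAM (ascend to a worst-case perturbation within an L2 ball of radius rho, then descend using the gradient at that point), not the paper's noise-resistant update rule; the function names and the toy quadratic loss are illustrative assumptions.

```python
import math

def sam_step(w, loss_grad, rho=0.05, lr=0.1):
    """One vanilla SAM update on a parameter vector (list of floats).

    Step 1: take a normalized ascent step of radius rho toward the
            locally "sharpest" nearby point w_adv.
    Step 2: descend from w using the gradient evaluated at w_adv.
    """
    g = loss_grad(w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    g_sharp = loss_grad(w_adv)
    return [wi - lr * gi for wi, gi in zip(w, g_sharp)]

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = [1.0, -2.0]
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

On this convex toy problem the iterates contract toward the minimizer; the point of SAM on real networks is that using the gradient at the perturbed point biases training toward flat minima, which is the generalization property the additive privacy noise discussed in the abstract tends to disrupt.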