Ren Han, Zhao Yajie, Zhang Yong, Sun Wei
Laboratory of Language Engineering and Computing, Guangdong University of Foreign Studies, Guangzhou, China.
Laboratory of Language and Artificial Intelligence, Guangdong University of Foreign Studies, Guangzhou, China.
PeerJ Comput Sci. 2024 Apr 23;10:e2005. doi: 10.7717/peerj-cs.2005. eCollection 2024.
Training with soft labels instead of hard labels can effectively improve the robustness and generalization of deep learning models. Label smoothing typically provides uniformly distributed soft labels during training, but it does not take the semantic differences among labels into account. This article introduces discrimination-aware label smoothing, an adaptive label smoothing approach that learns appropriate label distributions for iterative optimization objectives. In this approach, positive and negative samples provide experience from both sides, and an iterative learning method improves both regularization and model calibration. Experiments on five text classification datasets demonstrate the effectiveness of the proposed method.
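For context, the uniform label smoothing that the paper improves upon can be sketched as follows. This is a minimal illustration of the baseline technique only, not the paper's discrimination-aware method; the function name, class count, and the smoothing factor epsilon are illustrative assumptions.

```python
import numpy as np

def smooth_labels(hard_label: int, num_classes: int, epsilon: float = 0.1) -> np.ndarray:
    """Turn a hard class index into a uniformly smoothed soft-label vector.

    Each class receives epsilon / num_classes probability mass, and the
    true class keeps the remaining 1 - epsilon on top of its share, so the
    vector still sums to 1. Uniform smoothing treats all wrong classes
    identically, which is exactly the semantic blindness the paper targets.
    """
    soft = np.full(num_classes, epsilon / num_classes)
    soft[hard_label] += 1.0 - epsilon
    return soft

# Example: true class 2 out of 4 classes, epsilon = 0.1
print(smooth_labels(2, 4))  # -> [0.025 0.025 0.925 0.025]
```

An adaptive scheme such as the one proposed here would instead redistribute the epsilon mass non-uniformly, according to how semantically close each negative class is to the true label.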