Samsung Advanced Institute Of Technology, Samsung Electronics, Suwon, 16678, South Korea.
The Department of Mathematics, Inha University, Incheon, 22212, South Korea.
Neural Netw. 2021 Nov;143:209-217. doi: 10.1016/j.neunet.2021.06.012. Epub 2021 Jun 12.
Most deep neural networks (DNNs) are trained on data containing large amounts of noisy labels when they are deployed in practice. Because DNNs have enough capacity to fit arbitrary noisy labels, it is known to be difficult to train them robustly under label noise: noisy labels degrade DNN performance through the memorization effect caused by over-fitting. Earlier state-of-the-art methods used the small-loss trick to efficiently address robust training with noisy labels. In this paper, the relationship between uncertainty and clean labels is analyzed. We present a novel training method, called "Uncertain Aware Co-Training (UACT)", which uses not only the small-loss trick but also labels that are likely to be clean, selected according to uncertainty. Our robust learning technique (UACT) prevents DNNs from over-fitting to extremely noisy labels. By making better use of the uncertainty acquired from the network itself, we achieve good generalization performance. We compare the proposed method to current state-of-the-art algorithms on noisy versions of MNIST, CIFAR-10, CIFAR-100, T-ImageNet and News to demonstrate its effectiveness.
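The abstract combines two sample-selection signals: the small-loss trick (samples with small training loss are more likely to carry clean labels) and a measure of the network's own uncertainty. The following is a minimal NumPy sketch of these two signals; the function names, the `forget_rate` parameter, and the entropy-based uncertainty measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def small_loss_select(losses, forget_rate):
    """Small-loss trick: keep the fraction (1 - forget_rate) of the batch
    with the lowest per-sample losses, assumed more likely to be clean."""
    n_keep = int(len(losses) * (1.0 - forget_rate))
    return np.argsort(losses)[:n_keep]

def uncertainty_filter(probs_mc, threshold):
    """Keep samples whose predictive uncertainty is below a threshold.

    probs_mc: array of shape (T, N, C) -- T stochastic forward passes
    (e.g. MC dropout) over N samples with C classes. Uncertainty is
    measured here (an assumption) as the entropy of the mean prediction.
    """
    mean_probs = probs_mc.mean(axis=0)                            # (N, C)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    return np.where(entropy < threshold)[0]
```

In a co-training setup, each of the two networks would apply such selection to choose the samples it feeds to its peer; the uncertainty signal lets additional likely-clean labels be recovered beyond the small-loss subset.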