Zhang Youshan, Porter Ian R, Wieland Matthias, Basran Parminder S
Department of Clinical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, NY 14853, USA.
Department of Population Medicine and Diagnostic Sciences, College of Veterinary Medicine, Cornell University, Ithaca, NY 14853, USA.
Animals (Basel). 2022 Mar 31;12(7):886. doi: 10.3390/ani12070886.
Teat-end health assessments are crucial for maintaining milk quality and dairy cow health. One approach to automating teat-end health assessment is to use a convolutional neural network to classify the magnitude of teat-end alterations from digital images. This approach has been shown to be feasible with GoogLeNet, but a number of challenges remain, such as low performance and the difficulty of comparing performance across different ImageNet models. In this paper, we present a separable confident transductive learning (SCTL) model to improve the performance of teat-end image classification. First, we propose a separation loss to improve inter-class dispersion. Second, we generate high-confidence pseudo-labels to optimize the network. We further employ transductive learning with a categorical maximum mean discrepancy loss to narrow the gap between the training and test datasets. Experimental results demonstrate that, compared with retraining of the original approaches, the proposed SCTL model consistently achieves higher accuracy across all seventeen ImageNet models evaluated.
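Two of the components named in the abstract, confident pseudo-labeling and a categorical (class-wise) maximum mean discrepancy loss, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the RBF kernel choice, and the 0.9 confidence threshold are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(xs, xt, gamma=1.0):
    # Squared maximum mean discrepancy between two feature samples.
    return (rbf_kernel(xs, xs, gamma).mean()
            + rbf_kernel(xt, xt, gamma).mean()
            - 2.0 * rbf_kernel(xs, xt, gamma).mean())

def confident_pseudo_labels(probs, threshold=0.9):
    # Keep only target predictions whose max softmax probability
    # exceeds the threshold (the "high-confidence" pseudo-labels).
    labels = probs.argmax(axis=1)
    mask = probs.max(axis=1) >= threshold
    return labels, mask

def categorical_mmd(feat_s, y_s, feat_t, probs_t, n_classes,
                    threshold=0.9, gamma=1.0):
    # Class-wise MMD between labeled source features and confidently
    # pseudo-labeled target features, averaged over non-empty classes.
    y_t, mask = confident_pseudo_labels(probs_t, threshold)
    total, used = 0.0, 0
    for c in range(n_classes):
        xs = feat_s[y_s == c]
        xt = feat_t[mask & (y_t == c)]
        if len(xs) and len(xt):
            total += mmd2(xs, xt, gamma)
            used += 1
    return total / max(used, 1)
```

In a transductive setup such as the one described, a loss of this shape would be added to the classification objective so that, per teat-end alteration class, the feature distributions of the training and test images are pulled together.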