Zhang Lianyue, Han Gaoge, Qiao Yongliang, Xu Liu, Chen Ling, Tang Jinglei
College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China.
Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide 5005, Australia.
Animals (Basel). 2023 Oct 18;13(20):3250. doi: 10.3390/ani13203250.
Semantic segmentation and instance segmentation based on deep learning play a significant role in intelligent dairy goat farming. However, these algorithms require a large amount of pixel-level dairy goat image annotation for model training. At present, users mainly rely on Labelme for pixel-level image annotation, which makes obtaining high-quality annotations inefficient and time-consuming. To reduce the annotation workload for dairy goat images, we propose a novel interactive segmentation model, UA-MHFF-DeepLabv3+, which employs layer-by-layer multi-head feature fusion (MHFF) and upsampling attention (UA) to improve the segmentation accuracy of DeepLabv3+ on object boundaries and small objects. Experimental results show that our model achieved state-of-the-art segmentation accuracy on the DGImgs validation set compared with four previous state-of-the-art interactive segmentation models, obtaining mNoC@85 and mNoC@90 scores of 1.87 and 4.11, respectively, which are significantly lower than the best previous results of 3 and 5. Furthermore, to promote the adoption of the proposed algorithm, we designed and developed a dairy goat image-annotation system named DGAnnotation for pixel-level annotation of dairy goat images. In our tests, annotating a dairy goat instance takes only 7.12 s with DGAnnotation, five times faster than with Labelme.
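The abstract reports results in terms of mNoC@85 and mNoC@90, the mean Number of Clicks a simulated user needs before the predicted mask reaches 85% or 90% IoU with the ground truth (lower is better). A minimal sketch of how such a metric is typically computed, assuming a hypothetical `segment` callable that stands in for the interactive model and a cap on the click budget:

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks (flat lists of 0/1)."""
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def noc(segment, gt, threshold=0.85, max_clicks=20):
    """Number of clicks until IoU >= threshold; max_clicks if never reached."""
    clicks = []
    for n in range(1, max_clicks + 1):
        # Placeholder click: a real evaluator would place the next click
        # at the center of the largest error region of the current mask.
        clicks.append(n)
        pred = segment(clicks)
        if iou(pred, gt) >= threshold:
            return n
    return max_clicks

def mnoc(segment, dataset, threshold=0.85, max_clicks=20):
    """Mean NoC over a dataset of ground-truth masks."""
    counts = [noc(segment, gt, threshold, max_clicks) for gt in dataset]
    return sum(counts) / len(counts)
```

For example, a model whose mask only matches the ground truth after the second click yields `noc(...) == 2` for that image; averaging these counts over the validation set gives the mNoC@85 or mNoC@90 figures quoted above.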