Gong He, Liu Jingyi, Li Zhipeng, Zhu Hang, Luo Lan, Li Haoxu, Hu Tianli, Guo Ying, Mu Ye
College of Information Technology, Jilin Agricultural University, Changchun 130118, China.
Jilin Province Agricultural Internet of Things Technology Collaborative Innovation Center, Changchun 130118, China.
Animals (Basel). 2024 Sep 11;14(18):2640. doi: 10.3390/ani14182640.
As the sika deer breeding industry grows to large scale, accurately assessing the health of these animals is of paramount importance. Posture recognition via object detection is a key method for monitoring the well-being of sika deer: it provides a more nuanced view of their physical condition and helps the industry maintain high standards of animal welfare and productivity. To enable remote monitoring of sika deer without interfering with their natural behavior, and to improve animal welfare, this paper proposes GFI-YOLOv8, a posture recognition and detection algorithm for individual sika deer based on YOLOv8. First, the iAFF iterative attentional feature fusion module is added to the C2f blocks of the backbone, the original SPPF module is replaced with the AIFI module, and the attention mechanism is used to adaptively reweight feature channels; these changes refine feature granularity and improve the model's ability to recognize and understand sika deer behavior in complex scenes. Second, a novel convolutional neural network module is introduced to improve the efficiency and accuracy of feature extraction while preserving the model's depth and diversity. In addition, a new attention mechanism module is proposed to expand the receptive field and simplify the model, and a new feature pyramid network and an optimized detection head are presented to improve the recognition and interpretation of sika deer postures in intricate environments. The experimental results show that the model achieves 91.6% accuracy in recognizing sika deer posture, a 6% improvement in accuracy and a 4.6% increase in mAP50 over YOLOv8n. Compared with other YOLO-series models, including YOLOv5n, YOLOv7-tiny, YOLOv8n, YOLOv8s, YOLOv9, and YOLOv10, the proposed model achieves higher accuracy and improved mAP50 and mAP50-95 values. Its overall performance meets the requirements for accurate and rapid identification of sika deer posture, making it suitable for precise, real-time monitoring of sika deer in complex breeding environments and under all-weather conditions.
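The abstract only names the fusion module; as an illustration, the sketch below shows a minimal PyTorch implementation of an iAFF-style iterative attentional feature fusion block of the kind described as being added to the backbone's C2f blocks. The MS-CAM structure and the reduction ratio r follow the original iAFF formulation and are assumptions here, not the authors' exact configuration.

```python
# Minimal sketch of iterative attentional feature fusion (iAFF); layer sizes
# and the MS-CAM design are assumptions based on the original iAFF paper,
# not the exact GFI-YOLOv8 configuration.
import torch
import torch.nn as nn

class MSCAM(nn.Module):
    """Multi-scale channel attention: global + local branches, sigmoid gate."""
    def __init__(self, channels, r=4):
        super().__init__()
        mid = max(channels // r, 1)
        self.local_att = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.sigmoid(self.local_att(x) + self.global_att(x))

class iAFF(nn.Module):
    """Two-stage (iterative) attentional fusion of two same-shaped feature maps."""
    def __init__(self, channels, r=4):
        super().__init__()
        self.att1 = MSCAM(channels, r)
        self.att2 = MSCAM(channels, r)

    def forward(self, x, y):
        w1 = self.att1(x + y)          # first-stage channel weights
        fused = x * w1 + y * (1 - w1)  # intermediate fused feature
        w2 = self.att2(fused)          # refined weights from the fused feature
        return x * w2 + y * (1 - w2)

if __name__ == "__main__":
    x, y = torch.randn(2, 64, 40, 40), torch.randn(2, 64, 40, 40)
    print(iAFF(64)(x, y).shape)  # torch.Size([2, 64, 40, 40])
```

In a C2f block, such a module would replace the plain addition or concatenation of branch outputs, letting the learned channel weights decide how much each branch contributes at every fusion step.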