Yin Zhaoyang, Wang Zehua, Ye Junhua, Zhou Suyin, Xu Aijun
School of Mathematics and Computer Science, Zhejiang Agriculture and Forestry University, Hangzhou 311300, China.
School of Environmental and Resource Science, Zhejiang Agriculture and Forestry University, Hangzhou 311300, China.
Animals (Basel). 2025 Apr 3;15(7):1040. doi: 10.3390/ani15071040.
Pig tracking contributes to the assessment of pig behaviour and health. However, tracking pigs on real farms is very difficult: because the camera field of view (FOV) is incomplete, pigs frequently enter and exit the FOV, which degrades tracking accuracy. To improve pig-tracking efficiency, we propose a pig-tracking method based on skeleton feature similarity, named GcnTrack. We used YOLOv7-Pose to extract pig skeleton key points and designed a dual-tracking strategy that combines IOU matching with a skeleton keypoint-based graph convolutional re-identification (Re-ID) algorithm, allowing pigs to be tracked continuously even after they return from outside the FOV. Three sets of videos with an identical FOV, comprising short-, medium-, and long-duration recordings, were used to test the model and verify its performance. The GcnTrack method achieved a Multiple Object Tracking Accuracy (MOTA) of 84.98% and an identification F1 score (IDF1) of 82.22% on the first set of videos (short duration, 87 s to 220 s). The tracking precision was 74% on the second set of videos (medium duration, average 302 s). In the tracking experiments on the third set of videos (long duration, 14 min), pigs entered the scene 15.29 times on average, with an average of 6.28 identity switches (IDSs) per pig. In conclusion, our method provides an accurate and reliable pig-tracking solution for scenarios with an incomplete camera FOV.
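To make the dual-tracking strategy concrete, the sketch below illustrates how the two association stages described in the abstract could be combined: IoU matching against tracks from the previous frame, with a skeleton-embedding Re-ID fallback for detections that cannot be matched spatially (e.g., a pig returning from outside the FOV). This is a minimal illustration, not the authors' implementation; the GCN Re-ID branch is abstracted behind a hypothetical `embed_skeleton` callable, and all names and thresholds are assumptions.

```python
# Hypothetical sketch of the two-stage association: IoU matching first,
# then skeleton-embedding Re-ID for detections left unmatched.
# embed_skeleton, track['embedding'], and the thresholds are assumed names/values.
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(detections, active_tracks, lost_tracks, embed_skeleton,
              iou_thresh=0.3, sim_thresh=0.6):
    """detections: list of dicts with 'box' and 'keypoints'.
    active_tracks / lost_tracks: dicts track_id -> {'box', 'embedding'}.
    embed_skeleton: callable mapping keypoints -> 1-D feature vector,
    standing in for the graph convolutional Re-ID model."""
    matches, unmatched, used = {}, [], set()
    # Stage 1: greedy IoU matching against tracks visible in the previous frame.
    for d_idx, det in enumerate(detections):
        best_id, best_iou = None, iou_thresh
        for t_id, trk in active_tracks.items():
            if t_id in used:
                continue
            o = iou(det['box'], trk['box'])
            if o > best_iou:
                best_id, best_iou = t_id, o
        if best_id is not None:
            matches[d_idx] = best_id
            used.add(best_id)
        else:
            unmatched.append(d_idx)
    # Stage 2: skeleton-based Re-ID for detections with no spatial match,
    # e.g. a pig that left the FOV earlier and has now re-entered.
    for d_idx in unmatched:
        feat = embed_skeleton(detections[d_idx]['keypoints'])
        best_id, best_sim = None, sim_thresh
        for t_id, trk in lost_tracks.items():
            sim = float(np.dot(feat, trk['embedding']) /
                        (np.linalg.norm(feat) * np.linalg.norm(trk['embedding']) + 1e-9))
            if sim > best_sim:
                best_id, best_sim = t_id, sim
        matches[d_idx] = best_id  # None -> start a new track
    return matches
```

The thresholds and the greedy matching order are illustrative choices; the actual GcnTrack pipeline may use different matching logic and similarity measures.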