Mahato Shubhangi, Bi Hanqing, Neethirajan Suresh
Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada.
Faculty of Mathematics, University of Waterloo, Waterloo, ON, Canada.
Front Artif Intell. 2025 Aug 22;8:1545247. doi: 10.3389/frai.2025.1545247. eCollection 2025.
Precision livestock farming increasingly relies on non-invasive, high-fidelity systems capable of monitoring cattle with minimal disruption to behavior or welfare. Conventional identification methods, such as ear tags and wearable sensors, often compromise animal comfort and produce inconsistent data under real-world farm conditions. This study introduces Dairy DigiD, a deep learning-based biometric classification framework that categorizes dairy cattle into four physiologically defined groups (young, mature milking, pregnant, and dry cows) using high-resolution facial images. The system combines two complementary approaches: a DenseNet121 model for full-image classification, offering global visual context, and Detectron2 for fine-grained facial analysis. Dairy DigiD leverages Detectron2's multi-task architecture, using instance segmentation and keypoint detection across 30 anatomical landmarks (eyes, ears, muzzle) to refine facial localization and improve classification robustness. While DenseNet121 delivered strong baseline performance, its sensitivity to background noise limited generalizability. In contrast, Detectron2 demonstrated superior adaptability in uncontrolled farm environments, achieving classification accuracies between 93 and 98%. Its keypoint-driven strategy enabled robust feature localization and resilience to occlusions, lighting variations, and heterogeneous backgrounds. Cross-validation and perturbation-based explainability confirmed that biologically salient features guided classification, enhancing model transparency. By integrating animal-centric design with scalable AI, Dairy DigiD represents a significant advancement in automated livestock monitoring, offering an ethical, accurate, and practical alternative to traditional identification methods. The approach sets a precedent for responsible, data-driven decision-making in precision dairy management.
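The abstract does not report implementation details, so the following is a minimal sketch of how the two branches it describes might be set up, assuming PyTorch/torchvision for the DenseNet121 classifier and the Detectron2 COCO keypoint R-CNN baseline for the landmark branch. The dataset names (cattle_faces_train, cattle_faces_val), placeholder keypoint labels, and all hyperparameters are hypothetical and are not taken from the paper.

```python
# Sketch 1: a DenseNet121 backbone adapted for 4-way classification of
# dairy cattle facial images (young, mature milking, pregnant, dry).
# Assumption: standard torchvision model with a replaced classifier head;
# the authors' exact training configuration is not given in the abstract.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # young, mature milking, pregnant, dry

def build_densenet121_classifier(pretrained: bool = True) -> nn.Module:
    """Return a DenseNet121 whose classifier head predicts the 4 cattle groups."""
    weights = models.DenseNet121_Weights.DEFAULT if pretrained else None
    model = models.densenet121(weights=weights)
    in_features = model.classifier.in_features  # 1024 for DenseNet121
    model.classifier = nn.Linear(in_features, NUM_CLASSES)
    return model

if __name__ == "__main__":
    model = build_densenet121_classifier()
    dummy = torch.randn(1, 3, 224, 224)  # one RGB facial image, ImageNet-sized
    print(model(dummy).shape)            # torch.Size([1, 4])
```

```python
# Sketch 2: configuring Detectron2 for keypoint detection over 30 cattle
# facial landmarks (eyes, ears, muzzle), starting from the COCO keypoint
# R-CNN baseline. Dataset names and keypoint labels are placeholders;
# the dataset must be registered separately in COCO keypoint format.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"
)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1             # single "cattle face" category
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 30  # anatomical landmarks
cfg.INPUT.RANDOM_FLIP = "none"                  # avoids needing a keypoint flip map here
cfg.DATASETS.TRAIN = ("cattle_faces_train",)    # hypothetical registered dataset
cfg.DATASETS.TEST = ("cattle_faces_val",)

# Placeholder landmark names; the paper's actual landmark scheme is not listed
# in the abstract beyond eyes, ears, and muzzle.
MetadataCatalog.get("cattle_faces_train").keypoint_names = [
    f"kp_{i}" for i in range(30)
]
# Training would then proceed with detectron2.engine.DefaultTrainer(cfg).train()
# once the datasets above are registered.
```

Design note: separating the two branches mirrors the trade-off described in the abstract, where full-image classification supplies global context while the keypoint head anchors predictions to biologically salient facial landmarks.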