Hall-Solorio Allan, Ramirez-Alonso Graciela, Chay-Canul Alfonso Juventino, Lee-Rangel Héctor A, Vargas-Bello-Pérez Einar, Lopez-Flores David R
Computer Vision and Data Science Lab, Facultad de Ingeniería, Universidad Autónoma de Chihuahua, Circuito Universitario Campus II, Chihuahua 31125, Mexico.
División Académica de Ciencias Agropecuarias, Universidad Juárez Autónoma de Tabasco, Carr. Villahermosa-Teapa, km 25, Villahermosa 86280, Mexico.
Animals (Basel). 2025 Jul 21;15(14):2146. doi: 10.3390/ani15142146.
This study analyzes the use of a lightweight image-based deep learning model to classify dairy cows into low-, medium-, and high-milk-yield categories by automatically detecting the udder region of the cow. The implemented model was based on the YOLOv11 architecture, which enables efficient object detection and classification with real-time performance. The model was trained on a public dataset of cow images labeled with 305-day milk yield records. Thresholds were established to define the three yield classes, and a balanced subset of labeled images was selected for training, validation, and testing. To assess the robustness and consistency of the proposed approach, the model was trained 30 times following the same experimental protocol. The system achieved precision, recall, and mean Average Precision (mAP@50) of 0.408 ± 0.044, 0.739 ± 0.095, and 0.492 ± 0.031, respectively, across all classes. The highest precision (0.445 ± 0.055), recall (0.766 ± 0.107), and mAP@50 (0.558 ± 0.036) were observed in the low-yield class. Qualitative analysis revealed that misclassifications occurred mainly near class boundaries, emphasizing the importance of consistent image acquisition conditions. The resulting model was deployed in a mobile application designed to support field-level assessment by non-specialist users. These findings demonstrate the practical feasibility of applying vision-based models to support decision-making in dairy production systems, particularly in settings where traditional data collection methods are unavailable or impractical.
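The metrics above are reported as mean ± variability over 30 identical training runs. A minimal sketch of that aggregation step, assuming the ± values are sample standard deviations and using hypothetical per-run values (the paper's individual run results are not given in the abstract):

```python
from statistics import mean, stdev

def summarize_runs(values):
    """Return (mean, sample standard deviation) for a list of per-run metric values."""
    return mean(values), stdev(values)

# Hypothetical mAP@50 values from three of the 30 runs (illustrative only)
map50_runs = [0.47, 0.50, 0.51]
m, s = summarize_runs(map50_runs)
print(f"mAP@50 = {m:.3f} \u00b1 {s:.3f}")  # prints "mAP@50 = 0.493 ± 0.021"
```

The same summary would be computed independently for precision, recall, and mAP@50, both per class and averaged across all classes.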