Automatic detection of trapping events of postnatal piglets in loose housing pen: comparison of YOLO versions 4, 5, and 8.

Author Information

Yun Taeyong, Kim Jinsul, Yun Jinhyeon, Um Tai-Won

Affiliations

Department of Data Science, Chonnam National University, Gwangju 61186, Korea.

School of Electronics and Computer Engineering, Chonnam National University, Gwangju 61186, Korea.

Publication Information

J Anim Sci Technol. 2025 May;67(3):666-676. doi: 10.5187/jast.2024.e106. Epub 2025 May 31.

Abstract

In recent years, the pig industry has experienced an alarming surge in piglet mortality shortly after farrowing due to crushing by the sow. This issue has been exacerbated by the adoption of hyperprolific sows and the transition to loose housing pens, adversely affecting both animal welfare and productivity. In response to these challenges, researchers have progressively turned to the artificial intelligence of things (AIoT) to address various issues within the livestock sector. The primary objective of this study was to conduct a comparative analysis of different versions of object detection algorithms, aiming to identify the optimal AIoT system for monitoring piglet crushing events based on performance and practicality. The methodology involved extracting relevant footage depicting instances of piglet crushing from recorded farrowing pen videos, which was subsequently condensed into 2-3 min edited clips. These clips were categorized into three classes: no trapping, trapping, and crushing. Data augmentation techniques, including rotation, flipping, and adjustments to saturation and contrast, were applied to enhance the dataset. This study employed three deep learning object detection algorithms, You Only Look Once (YOLO)v4-Tiny, YOLOv5s, and YOLOv8s, followed by a performance analysis. The average precision (AP) for trapping detection was 0.963 for YOLOv4-Tiny and 0.995 for both YOLOv5s and YOLOv8s. Although trapping detection performance was comparable between YOLOv5s and YOLOv8s, YOLOv5s proved to be the better choice given its model size of 13.6 MB, compared with 22.4 MB for YOLOv4-Tiny and 21.4 MB for YOLOv8s. Considering both performance metrics and model size, YOLOv5s emerges as the most suitable model for detecting trapping within an AIoT framework. Future work may build on this research to refine and expand the scope of AIoT applications addressing challenges within the pig industry, ultimately contributing to advances in both animal husbandry practices and technological solutions.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/caa0/12159704/7f6e4cc5bef2/jast-67-3-666-g1.jpg
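The abstract weighs per-class average precision against on-disk model size when choosing among YOLOv4-Tiny, YOLOv5s, and YOLOv8s. As an illustration only, the minimal sketch below shows how the YOLOv8s leg of such a comparison might be set up with the Ultralytics Python API; the dataset configuration name, the class list, and every hyperparameter are assumptions for the sketch, not values reported in the paper, and YOLOv4-Tiny (Darknet) and YOLOv5s (ultralytics/yolov5 repository) would be trained with their own tooling on the same data splits.

```python
# A minimal sketch, not the authors' code: training and evaluating the YOLOv8s
# leg of the comparison with the Ultralytics API. "piglet_trapping.yaml" (with
# classes: no trapping, trapping, crushing) and all hyperparameters below are
# assumptions, not values reported in the paper.
import os

from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # pretrained small variant, as used in the study

# Flip, rotation, and HSV jitter roughly mirror the rotation/flip/saturation/
# contrast augmentation described in the abstract.
model.train(
    data="piglet_trapping.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
    fliplr=0.5,    # horizontal flip probability
    degrees=10.0,  # random rotation range (degrees)
    hsv_s=0.5,     # saturation jitter
    hsv_v=0.3,     # brightness jitter (closest built-in analogue to contrast)
)

# Per-class AP at IoU 0.5 is comparable to the trapping AP values quoted above.
metrics = model.val()
print("mAP@0.5:", metrics.box.map50)
print("per-class AP@0.5:", metrics.box.ap50)

# On-disk model size, the practicality criterion that favoured YOLOv5s.
weights = "runs/detect/train/weights/best.pt"  # default Ultralytics save path
print("model size (MB):", round(os.path.getsize(weights) / 1e6, 1))
```

On an embedded AIoT device, the exported weight file size often matters as much as AP, which is the trade-off the study uses to favour YOLOv5s despite its detection performance being on par with YOLOv8s.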
