Attention-Guided Instance Segmentation for Group-Raised Pigs.

Author Information

Hu Zhiwei, Yang Hua, Yan Hongwen

Affiliation

College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China.

Publication Information

Animals (Basel). 2023 Jul 3;13(13):2181. doi: 10.3390/ani13132181.

Abstract

In the pig farming environment, complex factors such as pig adhesion, occlusion, and changes in body posture pose significant challenges for segmenting multiple target pigs. To address these challenges, this study collected video data using a horizontal angle of view and a non-fixed lens. Specifically, a total of 45 pigs aged 20-105 days in 8 pens were selected as research subjects, resulting in 1917 labeled images. These images were divided into 959 for training, 192 for validation, and 766 for testing. A grouped attention module was employed in the feature pyramid network to fuse the feature maps from deep and shallow layers. The grouped attention module consists of a channel attention branch and a spatial attention branch. The channel attention branch models dependencies between channels to enhance feature mapping between related channels and improve semantic feature representation. The spatial attention branch establishes pixel-level dependencies by applying the response values of all pixels in a single-channel feature map to the target pixel; it further guides the original feature map to filter spatial location information and generate context-related outputs. The grouped attention module, along with data augmentation strategies, was incorporated into the Mask R-CNN and Cascade Mask R-CNN task networks to explore its impact on pig segmentation. The experiments showed that introducing data augmentation strategies improved the segmentation performance of the model to a certain extent. Taking Mask R-CNN as an example, under the same experimental conditions, introducing data augmentation led to improvements of 1.5%, 0.7%, 0.4%, and 0.5% on the four evaluation metrics, respectively. Furthermore, the grouped attention module achieved the best performance: with Mask R-CNN as the task network, it outperformed the existing CBAM attention module by 1.0%, 0.3%, 1.1%, and 1.2% on the same four metrics, respectively. We further studied the impact of the number of groups in the grouped attention on the final segmentation results. Additionally, visualizations of predictions on third-party data collected with a top-down acquisition method, which was not involved in model training, showed that the proposed model still achieved good segmentation results, demonstrating the transferability and robustness of the grouped attention. Through comprehensive analysis, we found that grouped attention is beneficial for achieving high-precision segmentation of individual pigs across different scenes, ages, and time periods. The research results can provide a reference for subsequent applications such as pig identification and behavior analysis in mobile settings.
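The abstract describes the grouped attention module only at a high level (a channel attention branch plus a spatial attention branch applied over groups of channels in the feature pyramid network), so the PyTorch sketch below is an illustration under those assumptions rather than the authors' implementation. It uses simple CBAM-style channel and spatial gating applied independently to each channel group, with a residual connection; the paper's actual branch designs, group count, and fusion rule may differ.

```python
# Minimal sketch of a grouped attention block: channel gating + spatial gating
# applied per channel group. All layer sizes and the grouping scheme are
# illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class GroupedAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 4, reduction: int = 16):
        super().__init__()
        assert channels % groups == 0, "channels must be divisible by groups"
        self.groups = groups
        group_ch = channels // groups

        # Channel attention branch: squeeze spatial dims, model cross-channel
        # dependencies within each group through a small bottleneck MLP.
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(group_ch, max(group_ch // reduction, 1), kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(max(group_ch // reduction, 1), group_ch, kernel_size=1),
        )

        # Spatial attention branch: collapse channels to a per-pixel response
        # map that re-weights spatial locations.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g = self.groups
        xg = x.view(b * g, c // g, h, w)  # treat each channel group independently

        # Channel attention: global average pooling -> MLP -> sigmoid gate.
        ca = torch.sigmoid(self.channel_mlp(xg.mean(dim=(2, 3), keepdim=True)))
        xg = xg * ca

        # Spatial attention: channel-wise mean and max -> conv -> sigmoid gate.
        sa_in = torch.cat(
            [xg.mean(dim=1, keepdim=True), xg.max(dim=1, keepdim=True).values],
            dim=1,
        )
        xg = xg * torch.sigmoid(self.spatial_conv(sa_in))

        # Residual connection keeps the original feature content.
        return xg.view(b, c, h, w) + x


if __name__ == "__main__":
    # Example: refine a 256-channel FPN-style feature map.
    feat = torch.randn(2, 256, 64, 64)
    out = GroupedAttention(channels=256, groups=4)(feat)
    print(out.shape)  # torch.Size([2, 256, 64, 64])
```

In the paper's setting, the module sits inside the feature pyramid network and refines the fused deep and shallow feature maps before the Mask R-CNN or Cascade Mask R-CNN heads consume them; the exact insertion point and number of groups are not specified in the abstract.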

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a299/10339863/f9107111c68d/animals-13-02181-g001.jpg
