Chen Yini, Gao Ronghua, Li Qifeng, Zhao Hongtao, Wang Rong, Ding Luyu, Li Xuwen
Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences, Beijing, 100097, China.
Department of Mathematics and Physics, North China Electric Power University, Beijing, 102206, China.
Sci Rep. 2025 Jul 2;15(1):23228. doi: 10.1038/s41598-025-97929-w.
Grassland sheep counting is essential for both animal husbandry and ecological balance: accurate population statistics help optimize livestock management and sustain grassland ecosystems. However, traditional counting methods are time-consuming and costly, especially for dense herds. Computer vision offers a cost-effective, labor-efficient alternative, but existing methods still face challenges: object-detection-based counting is prone to overcounts and missed detections, while instance segmentation requires extensive annotation effort. To better match the practical task of counting sheep on grasslands, we collected the Sheep1500 UAV Dataset with drones in real-world settings; the varying flight altitudes, diverse scenes, and different density levels give the dataset a high degree of diversity. To address the inaccurate counts caused by background-object interference in this dataset, we propose DASNet, a dual-branch multi-level attention network based on density map regression. DASNet is built on a modified VGG-19 backbone, with a dual-branch structure that integrates both shallow and deep features. A Conv Convolutional Block Attention Layer (CCBL) is incorporated to focus the network more effectively on sheep regions, and a Multi-Level Attention Module (MAM) is placed in the deep-feature branch. The MAM, consisting of three Light Channel and Pixel Attention Modules (LCPM), refines feature representations at both the channel and pixel levels, improving the accuracy of the generated density maps. In addition, residual connections link the modules, facilitating feature fusion across levels and adding flexibility in handling diverse information. The LCPM exploits attention mechanisms to extract multi-scale global features of the sheep regions, helping the network reduce the loss of deep feature information. Experiments on the Sheep1500 UAV Dataset show that DASNet significantly outperforms the baseline MAN network, achieving a Mean Absolute Error (MAE) of 3.95 and a Mean Squared Error (MSE) of 4.87, versus the baseline's MAE of 5.39 and MSE of 6.50. Thanks to its dual-branch feature enhancement and global multi-level feature fusion, DASNet handles challenging scenarios such as dense flocks and background noise, and its accuracy and computational efficiency make it a practical solution for sheep counting in precision agriculture.
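The abstract names DASNet's building blocks but not their internals, so the PyTorch sketch below shows only one plausible wiring of a dual-branch, density-map-regression network. The class names mirror the paper's terminology (LCPM, the MAM as three stacked LCPMs, a shallow and a deep branch over a VGG-19-style front end), but every kernel size, channel width, and the internal design of each attention module are our illustrative assumptions, not the published architecture; the CCBL is omitted for brevity.

import torch
import torch.nn as nn

class LCPM(nn.Module):
    """Assumed Light Channel and Pixel attention Module: channel reweighting
    from globally pooled statistics, pixel reweighting from a 1x1 convolution,
    wrapped in a residual connection (the paper's exact design may differ)."""
    def __init__(self, ch: int):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )
        self.pixel = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        out = x * self.channel(x)    # channel-level attention
        out = out * self.pixel(out)  # pixel-level attention
        return x + out               # residual link between modules

def vgg_block(cin: int, cout: int, convs: int) -> nn.Sequential:
    """VGG-19-style stage: stacked 3x3 convolutions followed by 2x pooling."""
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(cin if i == 0 else cout, cout, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class DASNetSketch(nn.Module):
    """Dual-branch density regressor: a shallow branch preserves spatial
    detail while a deep branch passes through three stacked LCPMs (the
    assumed MAM); fused features regress a one-channel density map whose
    spatial sum estimates the sheep count."""
    def __init__(self):
        super().__init__()
        self.stem = vgg_block(3, 64, 2)       # shared front end, 1/2 resolution
        self.shallow = vgg_block(64, 128, 2)  # shallow-feature branch, 1/4 resolution
        self.deep = nn.Sequential(            # deep-feature branch, 1/4 resolution
            vgg_block(64, 128, 2),
            LCPM(128), LCPM(128), LCPM(128),
        )
        self.head = nn.Sequential(            # density-map regressor
            nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1),
        )

    def forward(self, x):
        f = self.stem(x)
        fused = torch.cat([self.shallow(f), self.deep(f)], dim=1)
        density = self.head(fused)              # (N, 1, H/4, W/4)
        count = density.sum(dim=(1, 2, 3))      # per-image count estimate
        return density, count

For an input of shape (N, 3, H, W) with H and W divisible by 4, DASNetSketch returns the predicted density map and the per-image count obtained by summing it, which is the standard readout in density-map-based counting.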
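For reference, the reported MAE and MSE are count-level errors. In the crowd-counting literature (including the MAN baseline), "MSE" conventionally denotes the square root of the mean squared counting error, which is consistent with the reported MAE (3.95) and MSE (4.87) being on the same scale. The snippet below computes both under that assumption, with per-image counts taken as the sums of the predicted density maps; the example numbers are illustrative, not from the paper.

import math

def counting_errors(pred_counts, gt_counts):
    """MAE and MSE as conventionally reported in counting papers, where
    'MSE' is the root of the mean squared count error (an RMSE)."""
    n = len(pred_counts)
    mae = sum(abs(p - g) for p, g in zip(pred_counts, gt_counts)) / n
    mse = math.sqrt(sum((p - g) ** 2 for p, g in zip(pred_counts, gt_counts)) / n)
    return mae, mse

# Hypothetical per-image counts: predictions from summed density maps vs. ground truth.
print(counting_errors([101.8, 47.2, 215.0], [100, 50, 210]))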