
A method of identification and localization of tea buds based on lightweight improved YOLOV5.

Authors

Wang Yuanhong, Lu Jinzhu, Wang Qi, Gao Zongmei

Affiliations

Modern Agricultural Equipment Research Institute, Xihua University, Chengdu, China.

School of Mechanical Engineering, Xihua University, Chengdu, China.

Publication

Front Plant Sci. 2024 Nov 28;15:1488185. doi: 10.3389/fpls.2024.1488185. eCollection 2024.

Abstract

The low degree of intelligence and standardization in tea bud picking, together with laborious and time-consuming manual harvesting, poses significant challenges to the sustainable development of the high-quality tea industry, so there is an urgent need to investigate the key technologies of intelligent tea-picking robots. Model complexity demands substantial hardware computing resources, which limits the deployment of tea bud detection models on tea-picking robots. In this study, we therefore propose YOLOV5M-SBSD, a lightweight tea bud detection model, to address these issues. A Fuding white tea bud image dataset was established by collecting Fuding white tea images; the lightweight network ShuffleNetV2 was used to replace the YOLOV5 backbone; the up-sampling algorithm of YOLOV5 was optimized with the CARAFE module, which enlarges the receptive field of the network while remaining lightweight; BiFPN was adopted for more efficient multi-scale feature fusion; and the parameter-free attention module SimAM was introduced to enhance the feature extraction ability of the model without adding extra computation. The improved model, denoted YOLOV5M-SBSD, was compared with other mainstream object detection models and then evaluated on the tea bud dataset. The experimental results show that the detection precision for tea buds is 88.7%, the recall is 86.9%, and the mean average precision is 93.1%; compared with the original YOLOV5M, precision is 0.5% higher and mean average precision 0.2% higher, while model size is reduced by 82.89% and the parameter count and GFLOPs are reduced by 83.7% and 85.6%, respectively. The improved algorithm thus achieves higher detection accuracy while reducing computation and parameters.
It also lessens the dependence on hardware, provides a reference for deploying the tea bud detection model in the natural environment of the tea garden, and has theoretical and practical significance for the identification and localization performed by intelligent tea bud picking robots.
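Of the modifications above, the parameter-free SimAM attention is the simplest to illustrate: it reweights each activation by an energy-based saliency score computed from the feature map itself, so no learnable parameters are added. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation; the `(C, H, W)` layout and the `eps` regularizer are assumptions.

```python
import numpy as np

def simam(x, eps=1e-4):
    """Parameter-free SimAM-style attention on a (C, H, W) feature map.

    Each activation is gated by sigmoid of an inverse-energy score:
    activations far from their channel's spatial mean are emphasized.
    """
    n = x.shape[1] * x.shape[2] - 1               # spatial positions minus one
    mu = x.mean(axis=(1, 2), keepdims=True)       # per-channel spatial mean
    d = (x - mu) ** 2                             # squared deviation from mean
    v = d.sum(axis=(1, 2), keepdims=True) / n     # per-channel spatial variance
    e_inv = d / (4.0 * (v + eps)) + 0.5           # inverse energy per activation
    return x * (1.0 / (1.0 + np.exp(-e_inv)))     # sigmoid gating, same shape
```

Because the gate is derived entirely from the input statistics, the module adds only element-wise arithmetic at inference time, which is consistent with the abstract's claim of enhancing feature extraction without extra parameters.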


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/1964d6bcff8c/fpls-15-1488185-g001.jpg
