A method of identification and localization of tea buds based on lightweight improved YOLOV5.

Authors

Wang Yuanhong, Lu Jinzhu, Wang Qi, Gao Zongmei

Affiliations

Modern Agricultural Equipment Research Institute, Xihua University, Chengdu, China.

School of Mechanical Engineering, Xihua University, Chengdu, China.

Publication

Front Plant Sci. 2024 Nov 28;15:1488185. doi: 10.3389/fpls.2024.1488185. eCollection 2024.

DOI: 10.3389/fpls.2024.1488185
PMID: 39670263
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11634601/
Abstract

The low degree of intelligence and standardization in tea bud picking, together with laborious and time-consuming manual harvesting, poses significant challenges to the sustainable development of the high-quality tea industry, so there is an urgent need to investigate the key technologies of intelligent tea-picking robots. The complexity of detection models demands substantial hardware computing resources, which limits the deployment of tea bud detection models on tea-picking robots. Therefore, in this study we propose YOLOV5M-SBSD, a lightweight tea bud detection model, to address these issues. A Fuding white tea bud image dataset was established by collecting Fuding white tea images. The lightweight network ShuffleNetV2 was used to replace the YOLOV5 backbone; the up-sampling of YOLOV5 was optimized with the CARAFE module, which enlarges the receptive field of the network while keeping it lightweight; BiFPN was adopted to achieve more efficient multi-scale feature fusion; and the parameter-free attention module SimAM was introduced to enhance the feature extraction ability of the model without adding extra computation. The improved model, denoted YOLOV5M-SBSD, was compared and analyzed against other mainstream object detection models and then evaluated on the tea bud dataset. The experimental results show that the recognition accuracy for tea buds is 88.7%, the recall is 86.9%, and the average precision is 93.1%; compared with the original YOLOV5M, accuracy is 0.5% higher and average precision is 0.2% higher, while the model size is reduced by 82.89% and the parameters and GFLOPs are reduced by 83.7% and 85.6%, respectively. The improved algorithm achieves higher detection accuracy while reducing computation and the number of parameters. It also lowers the dependence on hardware, provides a reference for deploying tea bud detection models in the natural environment of tea gardens, and has theoretical and practical significance for the identification and localization functions of intelligent tea bud picking robots.
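The abstract names four drop-in changes to YOLOV5 (a ShuffleNetV2 backbone, CARAFE up-sampling, BiFPN fusion, and SimAM attention) without giving implementation details. As a rough illustration only, the PyTorch sketch below shows how two of these ideas, parameter-free SimAM attention and BiFPN-style weighted feature fusion, are commonly written; the class names, the e_lambda default, and the wiring into YOLOV5M-SBSD are assumptions of this sketch, not the authors' code.

```python
# Minimal PyTorch sketches of two modules named in the abstract.
# NOTE: illustrative only; not the authors' YOLOV5M-SBSD implementation.
import torch
import torch.nn as nn


class SimAM(nn.Module):
    """Parameter-free SimAM attention: re-weights each activation by an
    energy-based importance score, adding no learnable parameters."""

    def __init__(self, e_lambda: float = 1e-4):  # 1e-4 is a commonly used default (assumption)
        super().__init__()
        self.e_lambda = e_lambda
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n = x.shape[2] * x.shape[3] - 1
        # Squared deviation of every activation from its per-channel spatial mean.
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Inverse energy: more distinctive activations receive larger weights.
        e_inv = d / (4 * (d.sum(dim=[2, 3], keepdim=True) / n + self.e_lambda)) + 0.5
        return x * self.act(e_inv)


class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion of same-shaped feature maps."""

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.w)            # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)      # normalize so weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))


if __name__ == "__main__":
    x = torch.randn(1, 128, 40, 40)       # dummy feature map
    y = SimAM()(x)                         # same shape, re-weighted activations
    z = WeightedFusion(2)([x, y])          # learnable weighted sum of two inputs
    print(y.shape, z.shape)
```

Swapping the backbone for ShuffleNetV2 and the default up-sampling for CARAFE follows the same drop-in pattern, but those modules are larger and would be taken from their respective reference implementations.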

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/1964d6bcff8c/fpls-15-1488185-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/0b0fe48c179d/fpls-15-1488185-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/f011bd9f7b12/fpls-15-1488185-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/f910ab418f7d/fpls-15-1488185-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/e73bcf1b1402/fpls-15-1488185-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/e53f2c54b689/fpls-15-1488185-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/95a842f5ab33/fpls-15-1488185-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/0e627271e66a/fpls-15-1488185-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/2db8cd1e9516/fpls-15-1488185-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/42160f31b06a/fpls-15-1488185-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/bcd7868b3e1d/fpls-15-1488185-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/1837177cdfd3/fpls-15-1488185-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/e5d4e19807f2/fpls-15-1488185-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/5f262204d910/fpls-15-1488185-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/686e097b0d15/fpls-15-1488185-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/27f769155aea/fpls-15-1488185-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/fc1970e620c1/fpls-15-1488185-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/21a5/11634601/1c4d0d52a4ae/fpls-15-1488185-g018.jpg

Similar articles

1. A method of identification and localization of tea buds based on lightweight improved YOLOV5. Front Plant Sci. 2024 Nov 28;15:1488185. doi: 10.3389/fpls.2024.1488185. eCollection 2024.
2. Lightweight tea bud detection method based on improved YOLOv5. Sci Rep. 2024 Dec 28;14(1):31168. doi: 10.1038/s41598-024-82529-x.
3. Tea Bud Detection Model in a Real Picking Environment Based on an Improved YOLOv5. Biomimetics (Basel). 2024 Nov 13;9(11):692. doi: 10.3390/biomimetics9110692.
4. Lightweight tea bud recognition network integrating GhostNet and YOLOv5. Math Biosci Eng. 2022 Sep 5;19(12):12897-12914. doi: 10.3934/mbe.2022602.
5. A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model. Front Plant Sci. 2023 Sep 28;14:1199473. doi: 10.3389/fpls.2023.1199473. eCollection 2023.
6. Small target tea bud detection based on improved YOLOv5 in complex background. Front Plant Sci. 2024 Jun 3;15:1393138. doi: 10.3389/fpls.2024.1393138. eCollection 2024.
7. TBF-YOLOv8n: A Lightweight Tea Bud Detection Model Based on YOLOv8n Improvements. Sensors (Basel). 2025 Jan 18;25(2):547. doi: 10.3390/s25020547.
8. T-YOLO: a lightweight and efficient detection model for nutrient buds in complex tea-plantation environments. J Sci Food Agric. 2024 Aug 15;104(10):5698-5711. doi: 10.1002/jsfa.13396. Epub 2024 Mar 4.
9. TBC-YOLOv7: a refined YOLOv7-based algorithm for tea bud grading detection. Front Plant Sci. 2023 Aug 17;14:1223410. doi: 10.3389/fpls.2023.1223410. eCollection 2023.
10. Lightweight Algorithm for Apple Detection Based on an Improved YOLOv5 Model. Plants (Basel). 2023 Aug 23;12(17):3032. doi: 10.3390/plants12173032.

References cited in this article

1. A Tea Buds Counting Method Based on YOLOv5 and Kalman Filter Tracking Algorithm. Plant Phenomics. 2023;5:0030. doi: 10.34133/plantphenomics.0030. Epub 2023 Mar 30.
2. Lightweight tea bud recognition network integrating GhostNet and YOLOv5. Math Biosci Eng. 2022 Sep 5;19(12):12897-12914. doi: 10.3934/mbe.2022602.
3. Identification and picking point positioning of tender tea shoots based on MR3P-TS model. Front Plant Sci. 2022 Aug 12;13:962391. doi: 10.3389/fpls.2022.962391. eCollection 2022.
4. Research on Mask-Wearing Detection Algorithm Based on Improved YOLOv5. Sensors (Basel). 2022 Jun 29;22(13):4933. doi: 10.3390/s22134933.
5. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans Cybern. 2022 Aug;52(8):8574-8586. doi: 10.1109/TCYB.2021.3095305. Epub 2022 Jul 19.
6. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.