Zhang Fenyun, Sun Hongwei, Xie Shuang, Dong Chunwang, Li You, Xu Yiting, Zhang Zhengwei, Chen Fengnong
School of Automation, Hangzhou Dianzi University, Hangzhou, China.
Tea Research Institute, Shandong Academy of Agricultural Sciences, Jinan, China.
Front Plant Sci. 2023 Sep 28;14:1199473. doi: 10.3389/fpls.2023.1199473. eCollection 2023.
INTRODUCTION: Identifying and localizing tea picking points is a prerequisite for the automatic picking of famous tea. However, because tea buds are similar in color to both young and old leaves, they are difficult to identify accurately by eye.

METHODS: To address the segmentation, detection, and localization of tea picking points in the complex environment of mechanical famous-tea picking, this paper proposes the MDY7-3PTB model, which combines the high-precision segmentation capability of DeepLabv3+ with the fast detection capability of YOLOv7. The model segments tea buds first, then detects them, and finally localizes the picking point, yielding accurate identification of tea bud picking points. The DeepLabv3+ feature extraction backbone was replaced with the lighter MobileNetV2 network to improve computation speed. In addition, convolutional block attention modules (CBAM) were fused into the feature extraction and ASPP modules to further optimize performance. Finally, to address class imbalance in the dataset, the Focal Loss function was used to correct the imbalance and improve segmentation, detection, and positioning accuracy.

RESULTS AND DISCUSSION: The MDY7-3PTB model achieved a mean intersection over union (mIoU) of 86.61%, a mean pixel accuracy (mPA) of 93.01%, and a mean recall (mRecall) of 91.78% on the tea bud segmentation dataset, outperforming common segmentation models such as PSPNet, U-Net, and DeepLabv3+. For tea bud picking point recognition and positioning, the model achieved a mean average precision (mAP) of 93.52%, an F1 score (the harmonic mean of precision and recall) of 93.17%, a precision of 97.27%, and a recall of 89.41%. The model improved on existing mainstream YOLO-series detection models in all of these respects, with strong versatility and robustness. The method eliminates the influence of the background and detects the tea bud picking points directly, with almost no missed detections, providing accurate two-dimensional coordinates for the picking points at a positioning precision of 96.41%. This provides a strong theoretical basis for future tea bud picking.
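As a reading aid, the sketch below outlines the segment-then-detect-then-localize flow the abstract describes; it is not the authors' code. The `segmenter` and `detector` callables stand in for the trained DeepLabv3+ (MobileNetV2 backbone with CBAM) and YOLOv7 models, and the bottom-center picking-point rule is a hypothetical placeholder, since the abstract does not specify the localization rule.

```python
import numpy as np

def locate_picking_points(image, segmenter, detector):
    """Schematic MDY7-3PTB flow: segment buds, detect on the masked
    image, then reduce each detection to a 2-D picking-point coordinate."""
    mask = segmenter(image)                 # per-pixel tea-bud mask, shape (H, W)
    buds_only = image * mask[..., None]     # suppress the background
    boxes = detector(buds_only)             # YOLOv7 boxes as (x1, y1, x2, y2)
    # Hypothetical localization rule: take the bottom-center of each box
    # as the picking point; the paper's exact rule is not in the abstract.
    return [((x1 + x2) / 2.0, y2) for (x1, y1, x2, y2) in boxes]
```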
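The abstract names Focal Loss as the fix for class imbalance. Below is a minimal sketch of the standard Focal Loss (Lin et al., 2017) for per-pixel multi-class segmentation, assuming PyTorch; the paper's alpha and gamma values are not given, so the canonical defaults are used, and a single scalar alpha is a common simplification of the per-class weighting.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits: (N, C, H, W) raw class scores; targets: (N, H, W) class indices
    log_probs = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(log_probs, targets, reduction="none")  # -log p_t per pixel
    pt = torch.exp(-ce)                                    # true-class probability
    # (1 - pt)^gamma down-weights easy, well-classified pixels so the loss
    # concentrates on the rare tea-bud class.
    return (alpha * (1.0 - pt) ** gamma * ce).mean()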
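For clarity on the reported segmentation metrics, the sketch below shows how mIoU and mPA are typically computed from a class confusion matrix; exact definitions of mPA versus mRecall vary between evaluation codebases, and this uses one common convention. The final lines check that the reported detection F1 of 93.17% is indeed the harmonic mean of the reported precision (97.27%) and recall (89.41%).

```python
import numpy as np

def segmentation_metrics(conf):
    """conf[i, j] = number of pixels with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp            # pixels wrongly predicted as the class
    fn = conf.sum(axis=1) - tp            # pixels of the class that were missed
    miou = np.mean(tp / (tp + fp + fn))   # mean intersection over union
    mpa = np.mean(tp / (tp + fn))         # mean per-class pixel accuracy
    return miou, mpa

# Consistency check on the reported detection scores:
p, r = 0.9727, 0.8941
f1 = 2 * p * r / (p + r)                  # harmonic mean of precision and recall
assert abs(f1 - 0.9317) < 1e-3            # matches the reported 93.17%
```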