

Study on the fusion of improved YOLOv8 and depth camera for bunch tomato stem picking point recognition and localization

Authors

Song Guozhu, Wang Jian, Ma Rongting, Shi Yan, Wang Yaqi

Affiliations

College of Software, Shanxi Agricultural University, Taigu, China.

Publication

Front Plant Sci. 2024 Nov 29;15:1447855. doi: 10.3389/fpls.2024.1447855. eCollection 2024.

DOI: 10.3389/fpls.2024.1447855
PMID: 39678009
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11637874/
Abstract

When harvesting bunch tomatoes, accurately identifying certain fruiting stems proves challenging because they are obstructed by branches and leaves, or are similar in colour to the branches, main vines, and lateral vines. Irregularities in the growth pattern of the fruiting pedicels further complicate precise picking-point localization, reducing harvesting efficiency. Moreover, fruit stalks that are too short or too slender prevent the depth camera from accurately acquiring depth information. To address these challenges, this paper proposes an enhanced YOLOv8 model integrated with a depth camera for bunch tomato fruit-stalk picking-point identification and localization. Initially, the bottleneck in YOLOv8's C2f module is replaced with the FasterNet bottleneck, and the MLCA attention mechanism is added after the backbone network to construct the FastMLCA-YOLOv8 model for fruit-stalk recognition. Subsequently, an optimized K-means algorithm, using K-means++ for cluster-centre initialization and the silhouette coefficient to determine the optimal number of clusters, is employed to segment the fruit-stalk region. Following this, an erosion operation denoises the segmented fruit-stalk region and the Zhang thinning algorithm extracts the refined skeletal line, thereby determining the coordinate position of the fruit-stalk picking point in the binarized image. Finally, the issue of missing stalk depth values is addressed by a secondary extraction method to obtain the depth values and 3D coordinate information of the picking points in RGB-D camera coordinates. The experimental results demonstrate that the algorithm accurately identifies and locates the picking points of bunch tomatoes under complex background conditions, with a picking-point identification success rate of 91.3%. Compared with the baseline YOLOv8 model, accuracy is improved by 2.8%, and the error of the picking-point depth values is only ±2.5 mm. This research meets the needs of bunch tomato picking robots in fruit-stalk target detection and provides strong support for the development of bunch tomato picking technology.
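The final localization steps described in the abstract (recovering a missing stalk depth value, then back-projecting the picking pixel into RGB-D camera coordinates) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the paper does not detail its "secondary extraction" method, so a simple neighbourhood-median fallback stands in for it here, and the camera intrinsics are placeholder values.

```python
import numpy as np

def depth_with_fallback(depth_map, u, v, win=5):
    """Read the depth at pixel (u, v); if it is missing (0), fall back to the
    median of valid depths in a small window around the pixel. This fallback
    is an assumption standing in for the paper's 'secondary extraction' of
    missing stalk depths, whose details are not given in the abstract."""
    d = depth_map[v, u]
    if d > 0:
        return float(d)
    h = win // 2
    patch = depth_map[max(v - h, 0):v + h + 1, max(u - h, 0):u + h + 1]
    valid = patch[patch > 0]
    return float(np.median(valid)) if valid.size else float("nan")

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with depth (metres) into 3D camera
    coordinates using the standard pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with placeholder intrinsics (fx = fy = 600, principal point 320, 240):
depth = np.zeros((480, 640), dtype=np.float32)
depth[230:250, 310:330] = 1.0   # simulated stalk region at 1 m
depth[240, 320] = 0.0           # picking pixel itself has no depth reading
z = depth_with_fallback(depth, 320, 240)
xyz = pixel_to_camera_xyz(320, 240, z, 600.0, 600.0, 320.0, 240.0)
```

A pixel at the principal point maps to X = Y = 0, so the picking point lies straight ahead of the camera at the recovered depth.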


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/b564e23dfc4a/fpls-15-1447855-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/809423ec0e20/fpls-15-1447855-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/0210582ed4bb/fpls-15-1447855-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/b5f5152e850f/fpls-15-1447855-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/fbbd1b6f1299/fpls-15-1447855-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/b4911521d5a1/fpls-15-1447855-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/5a4333fb777f/fpls-15-1447855-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/6d1ae4f8f67c/fpls-15-1447855-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/d67f29ec0f25/fpls-15-1447855-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/e8940ee335e1/fpls-15-1447855-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/57441ee8a3d9/fpls-15-1447855-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/f6f7fec88ed3/fpls-15-1447855-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/f6aa1b6d118a/fpls-15-1447855-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/998a24fbadcc/fpls-15-1447855-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac96/11637874/3ebaf6c10a9e/fpls-15-1447855-g015.jpg

Similar Articles

1. Study on the fusion of improved YOLOv8 and depth camera for bunch tomato stem picking point recognition and localization.
Front Plant Sci. 2024 Nov 29;15:1447855. doi: 10.3389/fpls.2024.1447855. eCollection 2024.
2. Multi-stage tomato fruit recognition method based on improved YOLOv8.
Front Plant Sci. 2024 Sep 5;15:1447263. doi: 10.3389/fpls.2024.1447263. eCollection 2024.
3. Barrier-free tomato fruit selection and location based on optimized semantic segmentation and obstacle perception algorithm.
Front Plant Sci. 2024 Oct 31;15:1460060. doi: 10.3389/fpls.2024.1460060. eCollection 2024.
4. Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots.
Sensors (Basel). 2024 Oct 22;24(21):6777. doi: 10.3390/s24216777.
5. Improved Multi-Size, Multi-Target and 3D Position Detection Network for Flowering Chinese Cabbage Based on YOLOv8.
Plants (Basel). 2024 Oct 7;13(19):2808. doi: 10.3390/plants13192808.
6. YOLOv8s-Longan: a lightweight detection method for the longan fruit-picking UAV.
Front Plant Sci. 2025 Jan 22;15:1518294. doi: 10.3389/fpls.2024.1518294. eCollection 2024.
7. GPC-YOLO: An Improved Lightweight YOLOv8n Network for the Detection of Tomato Maturity in Unstructured Natural Environments.
Sensors (Basel). 2025 Feb 28;25(5):1502. doi: 10.3390/s25051502.
8. A novel hand-eye calibration method of picking robot based on TOF camera.
Front Plant Sci. 2023 Jan 17;13:1099033. doi: 10.3389/fpls.2022.1099033. eCollection 2022.
9. An occluded cherry tomato recognition model based on improved YOLOv7.
Front Plant Sci. 2023 Oct 20;14:1260808. doi: 10.3389/fpls.2023.1260808. eCollection 2023.
10. Research on Corn Leaf and Stalk Recognition and Ranging Technology Based on LiDAR and Camera Fusion.
Sensors (Basel). 2024 Aug 22;24(16):5422. doi: 10.3390/s24165422.

Cited By

1. Research on detection and location method of safflower filament picking points during the blooming period in unstructured environments.
Sci Rep. 2025 Mar 29;15(1):10851. doi: 10.1038/s41598-025-95620-8.

References

1. A Glove-Wearing Detection Algorithm Based on Improved YOLOv8.
Sensors (Basel). 2023 Dec 18;23(24):9906. doi: 10.3390/s23249906.