Intrarow Uncut Weed Detection Using You-Only-Look-Once Instance Segmentation for Orchard Plantations.

Affiliations

Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan.

Department of Agricultural and Biosystem Engineering, Universitas Padjadjaran, Jatinangor, Sumedang 45363, Indonesia.

Publication Information

Sensors (Basel). 2024 Jan 30;24(3):893. doi: 10.3390/s24030893.

Abstract

Mechanical weed management is a laborious task that requires manpower and carries risks when conducted within orchard rows. Intrarow weeding must still be performed manually because the confined structure of orchard rows, with their nets and poles, restricts the movement of riding mowers. Meanwhile, autonomous robotic weeders still face challenges in identifying uncut weeds because poles and tree canopies obstruct Global Navigation Satellite System (GNSS) signals. A properly designed intelligent vision system could enable an autonomous weeder to operate in uncut sections. Therefore, the objective of this study was to develop a vision module, based on YOLO instance segmentation models trained on a custom dataset, that supports autonomous robotic weeders in recognizing uncut weeds and obstacles (i.e., fruit tree trunks and fixed poles) within rows. The training dataset was acquired from a pear orchard located at the Tsukuba Plant Innovation Research Center (T-PIRC) at the University of Tsukuba, Japan. In total, 5000 images were preprocessed and labeled for training and testing the YOLO models. Four edge-device-oriented YOLO instance segmentation models were evaluated in this research for real-time application on an autonomous weeder: YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, and YOLOv8s-seg. A comparison study was conducted to evaluate all YOLO models in terms of detection accuracy, model complexity, and inference speed. The smaller YOLOv5-based and YOLOv8-based models were found to be more efficient than the larger models, and YOLOv8n-seg was selected as the vision module for the autonomous weeder. In the evaluation process, YOLOv8n-seg had better segmentation accuracy than YOLOv5n-seg, while the latter had the fastest inference time. The performance of YOLOv8n-seg also remained acceptable when deployed on a resource-constrained device suitable for robotic weeders. The results indicate that the detection accuracy and inference speed of the proposed deep learning-based approach are adequate for object recognition on edge devices during intrarow weeding operations in orchards.
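The paper does not publish its training code, but the workflow it describes (fine-tuning an instance segmentation model on a custom orchard dataset, validating it, and exporting it for an edge device) maps naturally onto the Ultralytics YOLO API. Below is a minimal sketch under that assumption; the file names (weeds.yaml, orchard_row.jpg), class labels, and hyperparameters are illustrative, not taken from the paper.

```python
# Minimal sketch of the described workflow using the Ultralytics YOLO API.
# Dataset config, image path, and hyperparameters are hypothetical.
from ultralytics import YOLO

# Start from COCO-pretrained segmentation weights for the "n" variant,
# the model the authors ultimately selected for the edge device.
model = YOLO("yolov8n-seg.pt")

# Fine-tune on a custom dataset; "weeds.yaml" is a hypothetical dataset
# config listing train/val image paths and the classes the paper targets
# (uncut weeds, fruit tree trunks, fixed poles).
model.train(data="weeds.yaml", epochs=100, imgsz=640)

# Validate to obtain segmentation accuracy metrics (e.g., mAP).
metrics = model.val()

# Run inference on one orchard image and inspect the detections.
results = model("orchard_row.jpg")  # hypothetical test image
for box, cls_id in zip(results[0].boxes.xyxy, results[0].boxes.cls):
    print(results[0].names[int(cls_id)], box.tolist())

# Export for a resource-constrained edge device (ONNX shown here;
# other formats such as TensorRT are also supported).
model.export(format="onnx")
```

The same loop would be repeated for each of the four candidate models (YOLOv5n-seg, YOLOv5s-seg, YOLOv8n-seg, YOLOv8s-seg) to compare detection accuracy, model complexity, and inference speed before selecting the vision module.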

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/db45/10857644/0974d92455fa/sensors-24-00893-g001.jpg
