Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)

Author Information

Mullins Connor C, Esau Travis J, Zaman Qamar U, Al-Mallahi Ahmad A, Farooque Aitazaz A

Affiliations

Department of Engineering, Faculty of Agriculture, Dalhousie University, Truro, NS B2N 5E3, Canada.

Faculty of Sustainable Design Engineering, University of Prince Edward Island, Charlottetown, PE C1A 4P3, Canada.

Publication Information

J Imaging. 2024 Dec 15;10(12):324. doi: 10.3390/jimaging10120324.

Abstract

This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z-axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthews correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey's HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
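The core idea of the abstract, encoding a depth map's Z values as a color gradient so a 2D segmentation network can process it, and scoring masks with IoU and MCC, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the jet-like colormap ramp and the plain per-pixel metric definitions are assumptions chosen for self-containment.

```python
import numpy as np

def depth_to_colormap(depth):
    """Normalize a depth map to [0, 1] and encode Z values as an RGB
    gradient (blue = near, red = far), yielding a 3-channel image that
    a standard 2D segmentation network can consume."""
    rng = depth.max() - depth.min()
    d = (depth - depth.min()) / (rng + 1e-9)
    # Piecewise-linear jet-like ramp over the normalized depth.
    r = np.clip(1.5 - np.abs(4 * d - 3), 0.0, 1.0)
    g = np.clip(1.5 - np.abs(4 * d - 2), 0.0, 1.0)
    b = np.clip(1.5 - np.abs(4 * d - 1), 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def iou(pred, truth):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def mcc(pred, truth):
    """Matthews correlation coefficient of two boolean masks."""
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

In practice a time-of-flight camera supplies `depth`, the colorized image is fed to a model such as YOLOv8n-seg, and the predicted mask is compared against a ground-truth annotation with `iou` and `mcc`, matching the metrics the study reports.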

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4177/11676057/79bf144f8ae4/jimaging-10-00324-g001.jpg
