
3D data-augmentation methods for semantic segmentation of tomato plant parts.

Authors

Xin Bolai, Sun Ji, Bartholomeus Harm, Kootstra Gert

Affiliations

Department of Plant Science, Wageningen University and Research, Wageningen, Netherlands.

Laboratory of Geo-Information Science and Remote Sensing, Wageningen University and Research, Wageningen, Netherlands.

Publication

Front Plant Sci. 2023 Jun 12;14:1045545. doi: 10.3389/fpls.2023.1045545. eCollection 2023.

DOI: 10.3389/fpls.2023.1045545
PMID: 37377799
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10291624/
Abstract

INTRODUCTION

3D semantic segmentation of plant point clouds is an important step towards automatic plant phenotyping and crop modeling. Since traditional hand-designed methods for point-cloud processing face challenges in generalisation, current methods are based on deep neural networks that learn to perform the 3D segmentation from training data. However, these methods require a large annotated training set to perform well. Especially for 3D semantic segmentation, the collection of training data is highly labour-intensive and time-consuming. Data augmentation has been shown to improve training on small training sets. However, it is unclear which data-augmentation methods are effective for 3D plant-part segmentation.

METHODS

In this work, five novel data-augmentation methods (global cropping, brightness adjustment, leaf translation, leaf rotation, and leaf crossover) were proposed and compared to five existing methods (online down-sampling, global jittering, global scaling, global rotation, and global translation). The methods were applied to PointNet++ for 3D semantic segmentation of the point clouds of three cultivars of tomato plant (Merlice, Brioso, and Gardener Delight). The point clouds were segmented into soil base, stick, stemwork, and other bio-structures.
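The existing global augmentations named above transform the whole plant point cloud at once. A minimal NumPy sketch of four of them (jittering, Z-axis rotation, scaling, translation) might look like the following; the parameter values and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def global_jitter(points, sigma=0.01, clip=0.05):
    """Add small clipped Gaussian noise to every point (N x 3 array)."""
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return points + noise

def global_rotate_z(points, angle=None):
    """Rotate the whole point cloud around the vertical (Z) axis."""
    if angle is None:
        angle = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points @ R.T

def global_scale(points, low=0.8, high=1.2):
    """Uniformly scale the whole point cloud by a random factor."""
    return points * np.random.uniform(low, high)

def global_translate(points, max_shift=0.1):
    """Shift the whole point cloud by a random offset per axis."""
    return points + np.random.uniform(-max_shift, max_shift, size=(1, 3))
```

Because these operations move every point identically, the per-point semantic labels stay unchanged, which is what makes them convenient for segmentation training.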

RESULTS AND DISCUSSION

Among the data-augmentation methods proposed in this paper, leaf crossover showed the most promising results and outperformed the existing methods. Leaf rotation (around the Z axis), leaf translation, and cropping also performed well on the 3D tomato plant point clouds, outperforming most of the existing methods, with global jittering as the exception. The proposed 3D data-augmentation approaches significantly reduce the overfitting caused by the limited training data. The improved plant-part segmentation further enables a more accurate reconstruction of the plant architecture.
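Unlike the global methods, the leaf-level augmentations act on individual leaf instances. As a hedged illustration, a per-leaf Z-axis rotation could be sketched as below; the `labels` array of per-point leaf-instance IDs and the pivot choice (the leaf point nearest the plant's vertical axis, as a stand-in for the stem-attachment point) are assumptions made for this sketch, not details taken from the paper.

```python
import numpy as np

def leaf_rotate_z(points, labels, leaf_id, max_angle=np.pi / 6):
    """Rotate one leaf instance around a vertical axis through an
    approximate attachment point, leaving all other points untouched.

    points: (N, 3) point cloud; labels: (N,) per-point instance IDs.
    """
    out = points.copy()
    mask = labels == leaf_id
    leaf = points[mask]
    # Approximate the attachment point as the leaf point closest to the
    # Z axis (assumed to run through the main stem).
    pivot = leaf[np.argmin(np.linalg.norm(leaf[:, :2], axis=1))]
    angle = np.random.uniform(-max_angle, max_angle)
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # Rotate the leaf about the pivot, keeping its height unchanged.
    out[mask] = (leaf - pivot) @ R.T + pivot
    return out
```

Leaf crossover would follow the same pattern, except that the masked leaf points of one plant are exchanged with a leaf from another plant instead of being rotated in place.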


Figures (g001–g014, from PMC10291624):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/9ac1abdba8a2/fpls-14-1045545-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/b672c6c0453c/fpls-14-1045545-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/7c121dddf52c/fpls-14-1045545-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/5c0b5ccc09db/fpls-14-1045545-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/7634f13cc0ca/fpls-14-1045545-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/24b17354d6e3/fpls-14-1045545-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/ef3fb535a369/fpls-14-1045545-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/707310c58d21/fpls-14-1045545-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/fca8a993075d/fpls-14-1045545-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/adfe436fde60/fpls-14-1045545-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/b49c8589cd21/fpls-14-1045545-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/302b6268ec88/fpls-14-1045545-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/b936141a317d/fpls-14-1045545-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1f18/10291624/f84c3500cd31/fpls-14-1045545-g014.jpg

Similar articles

1. 3D data-augmentation methods for semantic segmentation of tomato plant parts.
   Front Plant Sci. 2023 Jun 12;14:1045545. doi: 10.3389/fpls.2023.1045545. eCollection 2023.
2. Segmentation of structural parts of rosebush plants with 3D point-based deep learning methods.
   Plant Methods. 2022 Feb 20;18(1):20. doi: 10.1186/s13007-022-00857-3.
3. PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds of Plants.
   Plant Phenomics. 2022 May 23;2022:9787643. doi: 10.34133/2022/9787643. eCollection 2022.
4. A graph-based approach for simultaneous semantic and instance segmentation of plant 3D point clouds.
   Front Plant Sci. 2022 Nov 10;13:1012669. doi: 10.3389/fpls.2022.1012669. eCollection 2022.
5. FF-Net: Feature-Fusion-Based Network for Semantic Segmentation of 3D Plant Point Cloud.
   Plants (Basel). 2023 May 1;12(9):1867. doi: 10.3390/plants12091867.
6. Automatic Branch-Leaf Segmentation and Leaf Phenotypic Parameter Estimation of Pear Trees Based on Three-Dimensional Point Clouds.
   Sensors (Basel). 2023 May 8;23(9):4572. doi: 10.3390/s23094572.
7. A comparative study on point cloud down-sampling strategies for deep learning-based crop organ segmentation.
   Plant Methods. 2023 Nov 11;19(1):124. doi: 10.1186/s13007-023-01099-7.
8. FWNet: Semantic Segmentation for Full-Waveform LiDAR Data Using Deep Learning.
   Sensors (Basel). 2020 Jun 24;20(12):3568. doi: 10.3390/s20123568.
9. Cotton plant part 3D segmentation and architectural trait extraction using point voxel convolutional neural networks.
   Plant Methods. 2023 Mar 30;19(1):33. doi: 10.1186/s13007-023-00996-1.
10. Improved Point-Cloud Segmentation for Plant Phenotyping Through Class-Dependent Sampling of Training Data to Battle Class Imbalance.
    Front Plant Sci. 2022 Mar 28;13:838190. doi: 10.3389/fpls.2022.838190. eCollection 2022.

Cited by

1. Mapping of cotton bolls and branches with high-granularity through point cloud segmentation.
   Plant Methods. 2025 May 20;21(1):66. doi: 10.1186/s13007-025-01375-8.

References

1. Improved Point-Cloud Segmentation for Plant Phenotyping Through Class-Dependent Sampling of Training Data to Battle Class Imbalance.
   Front Plant Sci. 2022 Mar 28;13:838190. doi: 10.3389/fpls.2022.838190. eCollection 2022.
2. Segmentation of structural parts of rosebush plants with 3D point-based deep learning methods.
   Plant Methods. 2022 Feb 20;18(1):20. doi: 10.1186/s13007-022-00857-3.
3. Pheno4D: A spatio-temporal dataset of maize and tomato plant point clouds for phenotyping and advanced plant analysis.
   PLoS One. 2021 Aug 18;16(8):e0256340. doi: 10.1371/journal.pone.0256340. eCollection 2021.
4. Registration of spatio-temporal point clouds of plants for phenotyping.
   PLoS One. 2021 Feb 25;16(2):e0247243. doi: 10.1371/journal.pone.0247243. eCollection 2021.
5. Machine learning in plant science and plant breeding.
   iScience. 2020 Dec 5;24(1):101890. doi: 10.1016/j.isci.2020.101890. eCollection 2021 Jan 22.
6. Deep Learning for 3D Point Clouds: A Survey.
   IEEE Trans Pattern Anal Mach Intell. 2021 Dec;43(12):4338-4364. doi: 10.1109/TPAMI.2020.3005434. Epub 2021 Nov 3.
7. Automatic Leaf Segmentation for Estimating Leaf Area and Leaf Inclination Angle in 3D Plant Images.
   Sensors (Basel). 2018 Oct 22;18(10):3576. doi: 10.3390/s18103576.
8. A Robotic Platform for Corn Seedling Morphological Traits Characterization.
   Sensors (Basel). 2017 Sep 12;17(9):2082. doi: 10.3390/s17092082.
9. Accuracy analysis of a multi-view stereo approach for phenotyping of tomato plants at the organ level.
   Sensors (Basel). 2015 Apr 24;15(5):9651-65. doi: 10.3390/s150509651.