Qiao Gaofei, Zhang Zhibin, Niu Bin, Han Sijia, Yang Enhui
Key Laboratory of Wireless Networks and Mobile Computing, School of Computer Science, Inner Mongolia University, Hohhot, China.
Front Plant Sci. 2025 Mar 27;16:1491170. doi: 10.3389/fpls.2025.1491170. eCollection 2025.
High-quality 3D reconstruction and accurate 3D organ segmentation of plants are crucial prerequisites for automatically extracting phenotypic traits. In this study, we first extract a dense point cloud from an implicit representation derived from a 3D reconstruction of maize plants built with the Nerfacto neural radiance field model. Second, we propose a lightweight point cloud segmentation network (PointSegNet) specifically for stem and leaf segmentation. The network includes a Global-Local Set Abstraction (GLSA) module to integrate local and global features and an Edge-Aware Feature Propagation (EAFP) module to enhance edge awareness. Experimental results show that PointSegNet outperforms five other state-of-the-art deep learning networks, reaching 93.73%, 97.25%, 96.21%, and 96.73% in mean Intersection over Union (mIoU), precision, recall, and F1-score, respectively. Even on tomato and soybean plants with more complex structures, PointSegNet achieves the best metrics. We further refine the measurement method with principal component analysis (PCA), deriving parameters such as leaf length and leaf width from the PCA principal vectors. Finally, the maize stem thickness, stem height, leaf length, and leaf width obtained by our method are compared with manual measurements, yielding values of 0.99, 0.84, 0.94, and 0.87, respectively. These results indicate that our method extracts phenotypic parameters with high accuracy and reliability. Covering the entire pipeline from 3D reconstruction of maize plants through point cloud segmentation to phenotypic parameter extraction, this study provides a reliable and objective method for acquiring plant phenotypic parameters and will support plant phenotyping in smart agriculture.
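The PCA-based measurement idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the helper name `leaf_dimensions` and the synthetic leaf cloud are assumptions. The sketch centers a leaf point cloud, takes the principal vectors via SVD, and reads leaf length and width as the point-cloud extents along the first two principal components.

```python
import numpy as np

def leaf_dimensions(points):
    """Estimate leaf length and width from a leaf point cloud (N x 3).

    Centers the points, computes PCA principal vectors via SVD, and
    measures the extent of the cloud along the first two components:
    the first-component extent approximates leaf length, the second
    leaf width.
    """
    centered = points - points.mean(axis=0)
    # Right-singular vectors of the centered cloud are the principal vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:2].T  # project onto the first two principal axes
    length = proj[:, 0].max() - proj[:, 0].min()
    width = proj[:, 1].max() - proj[:, 1].min()
    return length, width

# Synthetic flat leaf, 20 cm long and 5 cm wide, rotated in the xy-plane
# so that the principal axes are not axis-aligned.
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(0, 20, 2000),  # along-leaf direction
    rng.uniform(0, 5, 2000),   # across-leaf direction
    np.zeros(2000),
])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
length, width = leaf_dimensions(pts @ rot.T)
```

Because PCA aligns the measurement axes with the directions of greatest spread, the estimates are independent of the leaf's orientation in the reconstructed scene; on real data the cloud would first be isolated per leaf by the segmentation network.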