

A Novel Approach to Pod Count Estimation Using a Depth Camera in Support of Soybean Breeding Applications.

Affiliations

Agricultural and Biosystems Engineering Department, North Dakota State University, Fargo, ND 58105, USA.

Department of Plant Sciences, North Dakota State University, Fargo, ND 58105, USA.

Publication Information

Sensors (Basel). 2023 Jul 18;23(14):6506. doi: 10.3390/s23146506.

Abstract

Improving soybean (Glycine max (L.) Merr.) yield is crucial for strengthening national food security. Predicting soybean yield is essential to maximize the potential of crop varieties. Non-destructive methods are needed to estimate yield before crop maturity. Various approaches, including the pod-count method, have been used to predict soybean yield, but they often face issues with the crop background color. To address this challenge, we explored the application of a depth camera for real-time filtering of RGB images, aiming to enhance the performance of the pod-counting classification model. Additionally, this study aimed to compare object detection models (YOLOv7 and YOLOv7-E6E) and select the most suitable deep learning (DL) model for counting soybean pods. After identifying the best architecture, we conducted a comparative analysis of the model's performance by training the DL model with and without background removal from images. Results demonstrated that removing the background using a depth camera improved YOLOv7's pod detection performance by 10.2% in precision, 16.4% in recall, 13.8% in mAP@0.5, and 17.7% in mAP@0.5:0.95 compared to when the background was present. Using a depth camera and the YOLOv7 algorithm for pod detection and counting yielded an mAP@0.5 of 93.4% and an mAP@0.5:0.95 of 83.9%. These results indicated a significant improvement in the DL model's performance when the background was segmented out and a reasonably larger dataset was used to train YOLOv7.
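The abstract does not give the implementation details of the depth-based filtering step, so the following is only a minimal sketch of the general idea: mask out RGB pixels whose aligned depth reading falls beyond a distance cutoff, so that only the plant in front of the camera remains in the image passed to the detector. The function name `remove_background`, the file names, and the 1.0 m cutoff are illustrative assumptions, not values from the paper; the sketch also assumes the RGB and depth frames have already been spatially aligned (for example by the depth camera's SDK).

```python
# Minimal sketch of depth-based background removal (NumPy + OpenCV).
# Not the authors' exact pipeline; thresholds and file names are assumptions.
import numpy as np
import cv2


def remove_background(rgb: np.ndarray, depth_m: np.ndarray,
                      max_distance_m: float = 1.0) -> np.ndarray:
    """Black out RGB pixels whose depth exceeds max_distance_m.

    rgb     : HxWx3 uint8 color image
    depth_m : HxW float depth map in meters, aligned to the RGB frame
    """
    # Keep pixels with a valid (non-zero) depth reading inside the cutoff.
    foreground = (depth_m > 0) & (depth_m <= max_distance_m)
    mask = foreground.astype(np.uint8) * 255

    # Morphological opening removes small speckles in the depth mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Apply the mask so background pixels become black before detection/training.
    return cv2.bitwise_and(rgb, rgb, mask=mask)


if __name__ == "__main__":
    # Hypothetical input files: an RGB frame and its aligned depth map in meters.
    rgb = cv2.imread("plant_rgb.png")
    depth_m = np.load("plant_depth_m.npy")
    cv2.imwrite("plant_rgb_masked.png", remove_background(rgb, depth_m))
```

Zeroing out the far pixels turns cluttered soil, weeds, and neighboring rows into a uniform black background, which is consistent with the paper's finding that background removal reduces color confusion and improves YOLOv7's precision, recall, and mAP.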


Figure 1a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/2787/10384073/6d45a3462257/sensors-23-06506-g001a.jpg
