School of Electrical & Computer Engineering, University of Georgia, Athens, GA 30602, USA.
Sensors (Basel). 2020 Dec 3;20(23):6896. doi: 10.3390/s20236896.
The use of deep neural networks (DNNs) in plant phenotyping has recently received considerable attention. By using DNNs, valuable insights into plant traits can be readily obtained. While these networks have made considerable advances in plant phenotyping, the results are processed too slowly to allow for real-time decision-making. The ability to perform plant phenotyping computations in real-time has therefore become a critical part of precision agriculture and agricultural informatics. In this work, we utilize state-of-the-art object detection networks to accurately detect, count, and localize plant leaves in real-time. Our work includes the creation of an annotated dataset of plants captured with a Canon Rebel XS camera. These images and annotations have been compiled and made publicly available. This dataset is then fed into a Tiny-YOLOv3 network for training. The trained Tiny-YOLOv3 network converges and accurately performs real-time localization and counting of the leaves. We also build a simple robotics platform based on an Android phone and an iRobot Create 2 to demonstrate the real-time capabilities of the network in the greenhouse. Additionally, a performance comparison is conducted between Tiny-YOLOv3 and Faster R-CNN. Unlike Tiny-YOLOv3, a single network that performs localization and identification in one pass, Faster R-CNN requires two steps for localization and identification. While Tiny-YOLOv3 improves on Faster R-CNN in inference time, F1 score, and false positive rate (FPR), it is worse on other measures such as difference in count (DiC) and average precision (AP). Specifically, for our implementation of Tiny-YOLOv3, the inference time is under 0.01 s, the F1 score is over 0.94, and the FPR is around 24%. Lastly, transfer learning is applied to a Tiny-YOLOv3 model trained only on smaller leaves, enabling it to detect larger leaves.
The main contributions of the paper are the creation of the dataset (shared with the research community) and the trained Tiny-YOLOv3 network for leaf localization and counting.