Graduate School of Science and Technology, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan.
Faculty of Life and Environmental Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8577, Japan.
Sensors (Basel). 2022 May 31;22(11):4187. doi: 10.3390/s22114187.
In orchard fruit-picking systems for pears, the challenge is to identify the full shape of the soft fruit so that robotic or automatic picking systems do not injure it. Advances in computer vision have made it possible to train for different shapes and sizes of fruit using deep learning algorithms. In this research, a fruit recognition method for robotic systems was developed to identify pears in a complex orchard environment, using a 3D stereo camera combined with Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning technology to obtain targets. The experiment used 9054 RGBA images (3018 original images and 6036 augmented images) to create a dataset divided into training, validation, and testing sets at a ratio of 6:3:1. The dataset was collected under different lighting conditions at different times of day: high-light (9-10 am) and low-light (6-7 pm) conditions (JST, Tokyo time, August 2021, summertime). All images were taken with a 3D stereo camera, which offered PERFORMANCE, QUALITY, and ULTRA depth modes; we used the PERFORMANCE mode to capture the images for the datasets. The camera on the left generated depth images and the camera on the right generated the original images. We also compared the performance of two R-CNN variants, Mask R-CNN and Faster R-CNN, by evaluating their mean Average Precision (mAP) on the same datasets with the same split ratio. Mask R-CNN was trained for 80 epochs of 500 steps each, while Faster R-CNN was trained for 40,000 steps. For the recognition of pears, Mask R-CNN achieved an mAP of 95.22% on the validation set and 99.45% on the testing set, whereas Faster R-CNN achieved an mAP of 87.9% on the validation set and 87.52% on the testing set.
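The 6:3:1 train/validation/test split described above can be sketched as a simple shuffled partition. This is a minimal illustration, not the authors' code; the file names are hypothetical placeholders, and only the image counts (9054 total) and the split ratio come from the abstract.

```python
import random

def split_dataset(filenames, ratios=(6, 3, 1), seed=42):
    """Shuffle a list of image files and partition it by integer ratios.

    Hypothetical sketch of the 6:3:1 split used in the paper; the
    actual split procedure is not described in the abstract.
    """
    rng = random.Random(seed)
    files = list(filenames)
    rng.shuffle(files)
    total = sum(ratios)
    n = len(files)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    train = files[:n_train]
    val = files[n_train:n_train + n_val]
    test = files[n_train + n_val:]
    return train, val, test

# 9054 images as reported (3018 original + 6036 augmented);
# the naming pattern below is invented for illustration.
images = [f"pear_{i:05d}.png" for i in range(9054)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 5432 2716 906
```

With 9054 images, the 6:3:1 ratio yields roughly 5432 training, 2716 validation, and 906 testing images.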
The two models, trained on the same dataset, performed differently on densely clustered pears versus individual pears: Mask R-CNN outperformed Faster R-CNN when the pears were densely clustered in the complex orchard. Therefore, the 3D-stereo-camera-based dataset combined with the Mask R-CNN vision algorithm achieved high accuracy in detecting individual pears among clusters in a complex orchard environment.
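The mAP figures compared above rest on the standard intersection-over-union (IoU) matching criterion: a detection counts as a true positive when its overlap with a ground-truth region exceeds a threshold (commonly 0.5). A minimal sketch of IoU for axis-aligned boxes, as background for the reported metric (the paper's own evaluation code is not shown in the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    IoU is the overlap measure underlying mAP: detections are matched
    to ground truth at a chosen IoU threshold before computing
    precision and recall.
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two partially overlapping boxes: intersection 25, union 175.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.142857...
```

For Mask R-CNN the same idea applies per pixel (mask IoU rather than box IoU), which is one reason instance masks help separate touching fruits in dense clusters.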