Nan Fei, Song Yang, Yu Xun, Nie Chenwei, Liu Yadong, Bai Yali, Zou Dongxiao, Wang Chao, Yin Dameng, Yang Wude, Jin Xiuliang
College of Agriculture, Shanxi Agricultural University, Taigu, Shanxi, China.
Institute of Crop Sciences, Chinese Academy of Agricultural Sciences/Key Laboratory of Crop Physiology and Ecology, Ministry of Agriculture, Beijing, China.
Front Plant Sci. 2023 Sep 26;14:1268015. doi: 10.3389/fpls.2023.1268015. eCollection 2023.
Maize (Zea mays L.) is one of the most important crops, underpinning food production and the wider industry. In recent years, global crop production has faced great challenges from diseases. However, most traditional methods make it difficult to efficiently identify disease-related phenotypes in germplasm resources, especially in actual field environments. To overcome this limitation, our study aims to evaluate the potential of a multi-sensor synchronized RGB-D camera with depth information for maize leaf disease classification. We distinguished maize leaves from the background based on the RGB-D depth information to eliminate interference from complex field environments. Four deep learning models (i.e., Resnet50, MobilenetV2, Vgg16, and Efficientnet-B3) were used to classify three main types of maize diseases, i.e., curvularia leaf spot [Curvularia lunata (Wakker) Boedijn], small spot [Bipolaris maydis (Nishik.) Shoemaker], and mixed spot diseases. We finally compared the pre-segmentation and post-segmentation results to test the robustness of the above models. Our main findings are: 1) The maize disease classification models based on the pre-segmentation image data performed slightly better than the ones based on the post-segmentation image data. 2) The pre-segmentation models overestimated the accuracy of disease classification because of the complexity of the background, whereas the post-segmentation models, which focus on leaf disease features, provided more practical results with shorter prediction times. 3) Among the post-segmentation models, the Resnet50 and MobilenetV2 models showed similar accuracy and outperformed the Vgg16 and Efficientnet-B3 models, and the MobilenetV2 model outperformed the other three models in terms of model size and single-image prediction time. Overall, this study provides a novel method for maize leaf disease classification using post-segmentation image data from a multi-sensor synchronized RGB-D camera and offers the possibility of developing relevant portable devices.
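The sketch below illustrates the general pipeline described in the abstract: masking background pixels with the aligned depth channel of an RGB-D frame, then passing the segmented leaf image to a lightweight classifier. It is a minimal illustration, not the authors' released code; the depth threshold, class-label order, and checkpoint filename are illustrative assumptions, and MobilenetV2 is used here only because the abstract identifies it as the most compact of the four compared models.

```python
# Minimal sketch of depth-based leaf segmentation followed by CNN classification.
# Assumptions (not from the paper): a 0.8 m background cutoff, the class order below,
# and a hypothetical fine-tuned checkpoint "maize_mobilenetv2.pt".
import cv2
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2

CLASSES = ["curvularia_leaf_spot", "small_spot", "mixed_spot"]  # assumed label order
LEAF_MAX_DEPTH_M = 0.8  # assumed cutoff: pixels farther than this count as background


def segment_leaf(rgb_bgr: np.ndarray, depth_m: np.ndarray) -> np.ndarray:
    """Remove background pixels using the depth map aligned to the RGB frame."""
    mask = ((depth_m > 0) & (depth_m < LEAF_MAX_DEPTH_M)).astype(np.uint8) * 255
    # Small morphological opening to suppress speckle noise in the depth mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.bitwise_and(rgb_bgr, rgb_bgr, mask=mask)


# Standard ImageNet-style preprocessing for the classifier input.
preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def classify(rgb_bgr: np.ndarray, depth_m: np.ndarray,
             weights_path: str = "maize_mobilenetv2.pt") -> str:
    """Classify a single RGB-D capture into one of the three disease classes."""
    segmented = segment_leaf(rgb_bgr, depth_m)
    rgb = cv2.cvtColor(segmented, cv2.COLOR_BGR2RGB)
    x = preprocess(rgb).unsqueeze(0)

    model = mobilenet_v2(weights=None, num_classes=len(CLASSES))
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))  # hypothetical checkpoint
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return CLASSES[int(probs.argmax(dim=1))]
```

Working on the segmented (post-segmentation) image, as in finding 2), keeps the classifier focused on leaf lesion features rather than field background, and a compact backbone such as MobilenetV2 keeps model size and single-image prediction time small enough for portable devices.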