Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou University, Shantou 515063, China.
Key Laboratory of Agricultural Remote Sensing, Ministry of Agriculture and Rural Affairs/Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing 100081, China.
Sensors (Basel). 2021 Sep 30;21(19):6540. doi: 10.3390/s21196540.
Yellow rust is a widespread disease that causes severe damage to wheat. The traditional method of manually identifying wheat yellow rust is highly inefficient. To improve this situation, this study proposed a deep-learning-based method for identifying wheat yellow rust from unmanned aerial vehicle (UAV) images. The method was based on the pyramid scene parsing network (PSPNet) semantic segmentation model, which was used to classify healthy wheat, yellow rust wheat, and bare soil in small-scale UAV images and to investigate the model's spatial generalization. In addition, the study proposed using the high-accuracy classification results of a traditional algorithm as weak labels for wheat yellow rust identification. The recognition accuracy of the PSPNet model in this study reached 98%. On this basis, the trained semantic segmentation model was applied to another wheat field; the results showed that the method had a certain generalization ability, again reaching 98% accuracy. Furthermore, the high-accuracy classification result of a support vector machine was used as a weak label under weak supervision, which better addressed the labeling problem for large-size images, and the final recognition accuracy reached 94%. The method presented in this study therefore facilitates timely control measures to reduce economic losses.
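The weak-supervision idea in the abstract, using a classical classifier's high-accuracy predictions as pseudo-labels for large images, can be sketched as below. This is an illustrative sketch only, not the authors' code: the three-band pixel features, class centers, and sample sizes are synthetic placeholders, and in the paper the resulting weak labels would supervise a PSPNet segmentation model rather than merely be inspected.

```python
# Illustrative sketch (assumptions, not the paper's implementation):
# train an SVM on a small hand-labelled pixel set, then use its
# predictions on a large unlabelled image as weak (pseudo) labels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 3-band "pixel spectra" for the three classes in the study:
# 0 = healthy wheat, 1 = yellow-rust wheat, 2 = bare soil.
centers = np.array([[0.1, 0.6, 0.3],   # healthy wheat
                    [0.4, 0.4, 0.2],   # yellow-rust wheat
                    [0.5, 0.2, 0.5]])  # bare soil
X_small = np.vstack([c + 0.05 * rng.standard_normal((50, 3)) for c in centers])
y_small = np.repeat([0, 1, 2], 50)

# 1) Train the SVM on the small labelled set.
svm = SVC(kernel="rbf").fit(X_small, y_small)

# 2) Predict on a large unlabelled pixel set to obtain weak labels.
X_large = np.vstack([c + 0.05 * rng.standard_normal((500, 3)) for c in centers])
weak_labels = svm.predict(X_large)

# 3) In the paper these weak labels would train the segmentation
#    network (PSPNet); here we only verify they cover valid classes.
print(weak_labels.shape, sorted(set(weak_labels)))
```

The point of the design is that per-pixel labels for a large UAV mosaic are expensive to draw by hand, while an SVM trained on a small labelled subset can label the rest automatically, accepting some label noise in exchange for scale.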