Zhang Yuhang, Shi Nan, Zhang Hao, Zhang Jun, Fan Xiaofei, Suo Xuesong
College of Mechanical and Electrical Engineering, Hebei Agricultural University, Baoding, China.
Key Laboratory of Microbial Diversity Research and Application of Hebei Province, College of Life Sciences, Hebei University, Baoding, China.
Front Plant Sci. 2022 Oct 19;13:914829. doi: 10.3389/fpls.2022.914829. eCollection 2022.
The detection and grading of 'Huangguan' pear disease spots is key to automating fruit processing. Because 'Huangguan' pears vary in individual shape and disease-spot type, traditional computer vision and pattern-recognition methods have limitations in detecting 'Huangguan' pear diseases. In recent years, the development of deep learning, and of convolutional neural networks in particular, has provided a new solution for the fast and accurate detection of 'Huangguan' pear diseases. To achieve automatic grading of 'Huangguan' pear appearance quality against complex backgrounds, this study proposes an integrated framework combining instance segmentation, semantic segmentation, and grading models. In the first stage, Mask R-CNN, both with and without an added preprocessing module, is used to segment 'Huangguan' pears from complex backgrounds. In the second stage, DeepLabV3+, UNet, and PSPNet are used to segment the disease spots; the ratio of spot pixel area to pear pixel area is then computed and mapped to one of three grades. In the third stage, the grade of each 'Huangguan' pear is predicted using ResNet50, VGG16, and MobileNetV3. The experimental results show that the proposed model can segment 'Huangguan' pears and their disease spots from complex backgrounds in stages and grade the severity of fruit disease. Among the first-stage instance segmentation models, Mask R-CNN with the CLAHE preprocessing module is the most accurate, with a pixel accuracy (PA) of 97.38% and a Dice coefficient of 68.08%. Among the second-stage semantic segmentation models, DeepLabV3+ is the most accurate, with a pixel accuracy of 94.03% and a Dice coefficient of 67.25%. Among the third-stage classification models, ResNet50 is the most accurate, with an average precision (AP) of 97.41% and an F1 score (harmonic mean of precision and recall) of 95.43%. In short, this work not only provides a new framework for the detection and identification of 'Huangguan' pear fruit diseases in complex backgrounds, but also lays a theoretical foundation for the assessment and grading of 'Huangguan' pear diseases.
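The second-stage grading rule (ratio of spot pixel area to pear pixel area, mapped to three grades) can be sketched directly from the two segmentation masks. The sketch below assumes boolean masks from the instance- and semantic-segmentation stages; the `grade_pear` function name and the 5%/20% grade thresholds are illustrative assumptions, since the abstract does not state the cut-off values.

```python
import numpy as np

def grade_pear(pear_mask: np.ndarray, spot_mask: np.ndarray) -> int:
    """Grade disease severity from two binary masks.

    pear_mask: boolean mask of the whole 'Huangguan' pear
        (output of the first-stage instance segmentation).
    spot_mask: boolean mask of disease spots
        (output of the second-stage semantic segmentation).
    Returns a grade in {1, 2, 3}; the thresholds are hypothetical,
    not taken from the paper.
    """
    pear_area = int(pear_mask.sum())
    if pear_area == 0:
        raise ValueError("empty pear mask")
    # Count only spot pixels that lie on the pear itself.
    spot_area = int((spot_mask & pear_mask).sum())
    ratio = spot_area / pear_area
    # Hypothetical three-grade cut-offs on the area ratio.
    if ratio < 0.05:
        return 1
    elif ratio < 0.20:
        return 2
    return 3
```

In the paper's pipeline this ratio-based grade serves as the label that the third-stage classifiers (ResNet50, VGG16, MobileNetV3) are then trained to predict directly from the image.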