Du Jianjun, Lu Xianju, Fan Jiangchuan, Qin Yajuan, Yang Xiaozeng, Guo Xinyu
Beijing Academy of Agriculture and Forestry Sciences, Beijing, China.
Beijing Key Lab of Digital Plant, Beijing Research Center for Information Technology in Agriculture, Beijing, China.
Front Plant Sci. 2020 Oct 6;11:563386. doi: 10.3389/fpls.2020.563386. eCollection 2020.
The yield and quality of fresh lettuce can be determined from the growth rate and color of individual plants. Manual assessment and phenotyping of hundreds of lettuce varieties is very time consuming and labor intensive. In this study, we utilized a "Sensor-to-Plant" greenhouse phenotyping platform to periodically capture top-view images of lettuce; datasets of over 2,000 plants from 500 lettuce varieties were thus captured at eight time points during vegetative growth. Here, we present a novel object detection-semantic segmentation-phenotyping method based on convolutional neural networks (CNNs) to conduct non-invasive, high-throughput phenotyping of the growth and development status of multiple lettuce varieties. Multistage CNN models for object detection and semantic segmentation were integrated to bridge the gap between image capture and plant phenotyping. An object detection model was used to detect and identify each pot from the image sequence with 99.82% accuracy, a semantic segmentation model was utilized to segment and identify each lettuce plant with a 97.65% F1 score, and a phenotyping pipeline was used to extract a total of 15 static traits (related to geometry and color) for each lettuce plant. Furthermore, dynamic traits (growth and accumulation rates) were calculated from the curves of the static traits across the eight growth time points. The correlation and descriptive ability of these static and dynamic traits were carefully evaluated for the interpretability of traits related to the digital biomass and quality of lettuce, and the observed accumulation rates of static traits more accurately reflected the growth status of lettuce plants. Finally, we validated the application of image-based high-throughput phenotyping through geometric measurement and color grading for a wide range of lettuce varieties.
The proposed method can be extended to crops such as maize, wheat, and soybean as a non-invasive means of phenotype evaluation and identification.
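The dynamic traits described above are derived from the change in static trait values across successive time points. A minimal sketch of that idea, assuming per-interval growth rate is computed as the change in a trait value divided by the elapsed days (the trait name and numbers below are illustrative, not data from the study):

```python
def growth_rates(days, values):
    """Per-interval growth rate: change in trait value per day between
    consecutive imaging time points."""
    return [
        (v1 - v0) / (d1 - d0)
        for (d0, v0), (d1, v1) in zip(
            zip(days, values), zip(days[1:], values[1:])
        )
    ]

# Hypothetical example: projected leaf area (cm^2) of one plant
# at four of the eight imaging time points.
days = [0, 7, 14, 21]
area = [10.0, 24.0, 52.0, 94.0]

rates = growth_rates(days, area)
# rates: [2.0, 4.0, 6.0] cm^2 per day, one value per interval
```

An accumulation-rate curve of this kind, computed for each static trait, is what the study uses to characterize growth status more accurately than a single snapshot.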