Henke Michael, Junker Astrid, Neumann Kerstin, Altmann Thomas, Gladilin Evgeny
Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), OT Gatersleben, Corrensstrasse 3, 06466 Seeland, Germany.
Plant Methods. 2020 Jul 9;16:95. doi: 10.1186/s13007-020-00637-x. eCollection 2020.
Automated segmentation of large amounts of image data is one of the major bottlenecks in high-throughput plant phenotyping. The dynamic optical appearance of developing plants, inhomogeneous scene illumination, and shadows and reflections in plant and background regions complicate the automated segmentation of unimodal plant images. To overcome the problem of ambiguous color information in unimodal data, images of different modalities can be combined into a virtual multispectral cube. However, due to motion artifacts caused by the relocation of plants between photochambers, the alignment of multimodal images is often compromised by blurring artifacts.
Here, we present an approach to automated segmentation of greenhouse plant images based on co-registration of fluorescence (FLU) and visible light (VIS) camera images, followed by separation of plant and marginal background regions using different species- and camera-view-tailored classification models. Our experimental results, including a direct comparison with manually segmented ground truth data, show that images of different plant types acquired at different developmental stages from different camera views can be automatically segmented with an average accuracy of ( ) using our two-step registration-classification approach.
Automated segmentation of arbitrary greenhouse images exhibiting highly variable optical plant and background appearance is a challenging task for data classification techniques that rely on the detection of invariances. To overcome the limitations of unimodal image analysis, we developed a two-step registration-classification approach for the combined analysis of fluorescence and visible light images. Our experimental results show that this algorithmic approach enables accurate segmentation of different FLU/VIS plant images and is suitable for application in a fully automated, high-throughput manner.
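The abstract does not specify the registration or classification models used; the following is a minimal illustrative sketch of the general two-step idea only, not the authors' method. It assumes grayscale FLU and VIS images as NumPy arrays, substitutes a brute-force integer-translation search for the paper's co-registration step, and a simple per-pixel intensity threshold on both aligned modalities for the species- and view-tailored classifiers. The function names and thresholds are hypothetical.

```python
import numpy as np

def register_translation(ref, moving, max_shift=5):
    """Step 1 (stand-in for co-registration): brute-force search for the
    integer (dy, dx) translation that best aligns `moving` to `ref`,
    minimizing the sum of squared differences."""
    best_err, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.sum((ref - shifted) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

def segment_plant(flu, vis, flu_thresh=0.5, vis_thresh=0.5):
    """Step 2 (stand-in for the tailored classifiers): after aligning the
    FLU image onto the VIS image, label a pixel as plant only where both
    modalities exceed their thresholds, so ambiguous color information in
    one modality is resolved by the other."""
    dy, dx = register_translation(vis, flu)
    flu_aligned = np.roll(np.roll(flu, dy, axis=0), dx, axis=1)
    return (flu_aligned > flu_thresh) & (vis > vis_thresh)
```

In a real pipeline the translation search would be replaced by a robust multimodal registration (e.g. feature- or intensity-based with subpixel accuracy), and the thresholding by trained classification models per species and camera view, as the abstract describes.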