Santiago Gustavo N, Cisdeli Magalhaes Pedro H, Carcedo Ana J P, Marziotte Lucia, Mayor Laura, Ciampitti Ignacio A
Department of Agronomy, Kansas State University, Manhattan, KS 66506, USA.
Corteva Agriscience, Wamego, KS 66547, USA.
Plant Phenomics. 2024 Aug 28;6:0234. doi: 10.34133/plantphenomics.0234. eCollection 2024.
High-throughput phenotyping is the bottleneck for advancing field trait characterization and yield improvement in major field crops. Specifically for sorghum (Sorghum bicolor L.), rapid plant-level yield estimation is highly dependent on characterizing the number of grains within a panicle. In this context, the integration of computer vision and artificial intelligence algorithms with traditional field phenotyping can be a critical solution to reduce labor costs and time. Therefore, this study aims to improve sorghum panicle detection and grain number estimation from smartphone-captured images under field conditions. A preharvest benchmark dataset was collected at field scale (2023 season, Kansas, USA), with 648 images of sorghum panicles retrieved via a smartphone device, and the grain number counted. Each sorghum panicle image was manually labeled, and the images were augmented. Two models were trained using the Detectron2 and YOLOv8 frameworks for detection and segmentation, with an average precision of 75% and 89%, respectively. For the grain number, 3 models were trained: MCNN (multiscale convolutional neural network), TCNN-Seed (two-column CNN-Seed), and Sorghum-Net (developed in this study). The Sorghum-Net model showed a mean absolute percentage error of 17%, surpassing the other models. Lastly, a simple equation was presented to relate the count from the model (using images from only one side of the panicle) to the field-derived observed number of grains per sorghum panicle. The resulting framework estimated grain number with a 17% error. The proposed framework lays the foundation for the development of a more robust application to estimate sorghum yield from smartphone images at the plant level.
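The abstract's final step, a simple equation relating the model's one-side panicle count to the field-observed total grain number, evaluated by mean absolute percentage error (MAPE), can be sketched as below. This is a minimal illustration assuming an ordinary least-squares linear calibration; the data, coefficients, and functional form are hypothetical, not the study's fitted equation.

```python
# Hedged sketch: linear calibration from a one-side grain count to the
# observed total per panicle, with MAPE as the error metric reported in
# the study. All numbers below are illustrative, not the paper's values.

def fit_linear(x, y):
    """Closed-form ordinary least-squares fit: y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical data: model counts from one panicle side vs. observed totals.
one_side = [420, 510, 380, 600, 455]
observed = [860, 1010, 790, 1180, 930]

a, b = fit_linear(one_side, observed)
pred = [a * x + b for x in one_side]
print(f"total = {a:.2f} * one_side + {b:.1f}; MAPE = {mape(observed, pred):.1f}%")
```

In practice the calibration would be fitted on panicles with full field-derived grain counts and then applied to new one-side smartphone counts.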