Divyanth L G, Khanal Salik Ram, Paudel Achyut, Mattupalli Chakradhar, Karkee Manoj
Center for Precision and Automated Agricultural Systems, Department of Biological Systems Engineering, Washington State University, Prosser, WA, United States.
Department of Plant Pathology, Mount Vernon Northwestern Washington Research and Extension Center, Washington State University, Mount Vernon, WA, United States.
Front Plant Sci. 2024 Dec 19;15:1512632. doi: 10.3389/fpls.2024.1512632. eCollection 2024.
Molecular-based detection of pathogens from potato tubers holds promise, but the initial sample extraction process is labor-intensive. Developing a robotic tuber sampling system, equipped with a fast and precise machine vision technique to identify optimal sampling locations on a potato tuber, offers a viable solution. However, detecting sampling locations such as eyes and stolon scars is challenging due to variability in their appearance, size, and shape, along with soil adhering to the tubers. In this study, we addressed these challenges by evaluating various deep-learning-based object detectors, encompassing the You Only Look Once (YOLO) variants YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOv9, YOLOv10, and YOLO11, for detecting eyes and stolon scars across a range of potato cultivars. A robust image dataset obtained from tubers of five potato cultivars (three russet skinned, one red skinned, and one purple skinned) was developed as a benchmark for detection of these sampling locations. The mean average precision at an intersection over union threshold of 0.5 (mAP@0.5) ranged from 0.832 and 0.854 with YOLOv5n to 0.903 and 0.914 with YOLOv10l. Among all the tested models, YOLOv10m showed the optimal trade-off between detection accuracy (mAP@0.5 of 0.911) and inference time (92 ms), along with satisfactory generalization performance when cross-validated among the cultivars used in this study. The model benchmarking and inferences of this study provide insights for advancing the development of a robotic potato tuber sampling device.
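The headline metric above, mAP@0.5, counts a predicted bounding box as a true positive only when its intersection over union (IoU) with a ground-truth box is at least 0.5. As a point of reference (not part of the paper's code), the IoU computation for axis-aligned boxes can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

At an mAP@0.5 threshold, a detection of an eye or stolon scar whose box overlaps the annotation with `iou >= 0.5` is scored as correct before precision is averaged over recall levels and classes.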