Chen Bangbang, Ding Feng, Ma Baojian, Yao Qijun, Ning Shanping
School of Mechatronic Engineering, Xi'an Technological University, Xi'an, 710021, China.
School of Mechatronic Engineering, Xinjiang Institute of Technology, Aksu, 843100, China.
Sci Rep. 2025 Mar 29;15(1):10851. doi: 10.1038/s41598-025-95620-8.
To address the challenges encountered by safflower filament harvesting robots in detecting and localizing harvesting points in unstructured environments, this study proposes a harvesting point detection and localization model based on the DSOE (Detect-Segment-OpenCV Extraction) method, integrated with a localization system using a depth camera. First, the YOLO-SaFi model was employed to optimize the classification of a safflower filament dataset, identifying harvestable safflower filaments for further study. Second, a lightweight segmentation detection head (LSDH) was introduced into the YOLO-SaFi model to efficiently segment safflower filaments and fruit balls. Using the OpenCV toolkit, contour information of the safflower filaments and fruit balls was extracted, and the line connecting their centroids, intersected with the safflower filament contour, was used to determine the 2D harvesting points. Finally, a localization control system was developed based on a Delta robotic arm and a depth camera to precisely determine the spatial harvesting point locations. Experimental results indicate that the improved YOLO-SaFi-LSDH model reduces the model size by 30.2% while achieving segmentation precision, recall, and average precision of 95.0%, 95.0%, and 96.8%, respectively, significantly outperforming conventional detection heads. Additionally, the localization system demonstrated an overall detection success rate of 91.0%, with average localization errors of 2.42 mm along the x-axis, 2.86 mm along the y-axis, and 3.18 mm along the z-axis. These results show that the proposed model exhibits superior detection and localization performance in complex environments, providing a solid theoretical foundation for the development of intelligent safflower filament harvesting robots.