A deep learning approach for deriving wheat phenology from near-surface RGB image series using spatiotemporal fusion.

Authors

Cai Yucheng, Li Yan, Qi Xuerui, Zhao Jianqing, Jiang Li, Tian Yongchao, Zhu Yan, Cao Weixing, Zhang Xiaohu

Affiliations

National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China.

Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China.

Publication

Plant Methods. 2024 Sep 30;20(1):153. doi: 10.1186/s13007-024-01278-0.

Abstract

Accurate monitoring of wheat phenological stages is essential for effective crop management and informed agricultural decision-making. Traditional methods often rely on labour-intensive field surveys, which are prone to subjective bias and limited temporal resolution. To address these challenges, this study explores the potential of near-surface cameras combined with an advanced deep-learning approach to derive wheat phenological stages from high-quality, real-time RGB image series. Three deep learning models based on three different spatiotemporal feature fusion methods, namely sequential fusion, synchronous fusion, and parallel fusion, were constructed and evaluated for deriving wheat phenological stages from these near-surface RGB image series. Moreover, the impact of different image resolutions, capture perspectives, and model training strategies on the performance of the deep learning models was also investigated. The results indicate that the model using the sequential fusion method is optimal, achieving an overall accuracy (OA) of 0.935, a mean absolute error (MAE) of 0.069, an F1-score (F1) of 0.936, and a kappa coefficient (Kappa) of 0.924 in identifying wheat phenological stages. In addition, an enhanced image resolution of 512 × 512 pixels and a suitable capture perspective, specifically a sensor viewing angle of 40° to 60° vertically, introduce more effective features for phenological stage detection, thereby improving the model's accuracy. Furthermore, applying a two-step fine-tuning strategy during model training also enhances the model's robustness to random variations in perspective. This research introduces an innovative approach for real-time phenological stage detection and provides a solid foundation for precision agriculture. By accurately deriving critical phenological stages, the methodology developed in this study supports the optimization of crop management practices, which may result in improved resource efficiency and sustainability across diverse agricultural settings. The implications of this work extend beyond wheat, offering a scalable solution that can be adapted to monitor other crops, thereby contributing to more efficient and sustainable agricultural systems.
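To make the reported evaluation metrics concrete, the sketch below shows one way OA, MAE, macro F1, and Cohen's kappa can be computed for predicted phenological stages. This is an illustrative reconstruction, not the paper's code: it assumes stage labels are encoded as ordinal integers (0 = earliest stage), so the MAE here measures how many stages, on average, a prediction is off by.

```python
# Hedged sketch (assumed encoding): phenological stages as ordinal
# integers 0..n_classes-1; the paper's exact implementation is not given.

def stage_metrics(y_true, y_pred, n_classes):
    """Return (OA, MAE, macro F1, Cohen's kappa) for stage predictions."""
    n = len(y_true)
    # Overall accuracy: fraction of frames with the exact stage predicted.
    oa = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # MAE on stage indices: average distance in stages between truth
    # and prediction (meaningful because stages are ordered).
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n

    # Macro F1: per-class F1 averaged over all stages.
    f1s = []
    for c in range(n_classes):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    f1 = sum(f1s) / n_classes

    # Cohen's kappa: observed agreement corrected for chance agreement
    # derived from the marginal class frequencies.
    pe = sum(sum(t == c for t in y_true) * sum(p == c for p in y_pred)
             for c in range(n_classes)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, mae, f1, kappa


# Toy example: six frames, one mislabelled by a single stage.
oa, mae, f1, kappa = stage_metrics([0, 0, 1, 1, 2, 2],
                                   [0, 1, 1, 1, 2, 2], n_classes=3)
# OA = 0.833, MAE = 0.167, Kappa = 0.75
```

Reporting MAE alongside OA, as the abstract does, is useful for ordinal targets: a model that confuses adjacent stages scores much better on MAE than one that confuses distant stages, even at identical accuracy.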


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8f41/11443927/d86917aba522/13007_2024_1278_Fig1_HTML.jpg
