Research on Crop Classification Using U-Net Integrated with Multimodal Remote Sensing Temporal Features

Authors

Zhu Zhihui, Chen Yuling, Lu Chengzhuo, Yang Minglong, Xia Yonghua, Huang Dewu, Lv Jie

Affiliations

Department of Earth Science and Technology, City College, Kunming University of Science and Technology, Kunming 650093, China.

School of Geography and Information Engineering, China University of Geosciences (Wuhan), Wuhan 430078, China.

Publication

Sensors (Basel). 2025 Aug 13;25(16):5005. doi: 10.3390/s25165005.

Abstract

Crop classification plays a vital role in acquiring the spatial distribution of agricultural crops, enhancing agricultural management efficiency, and ensuring food security. With the continuous advancement of remote sensing technologies, achieving efficient and accurate crop classification using remote sensing imagery has become a prominent research focus. Conventional approaches largely rely on empirical rules or single-feature selection (e.g., NDVI or VV) for temporal feature extraction, lacking systematic optimization of multimodal feature combinations from optical and radar data. To address this limitation, this study proposes a crop classification method based on feature-level fusion of multimodal remote sensing data, integrating the complementary advantages of optical and SAR imagery to overcome the temporal and spatial representation constraints of single-sensor observations. The study was conducted in Story County, Iowa, USA, focusing on the growth cycles of corn and soybean. Eight vegetation indices (including NDVI and NDRE) and five polarimetric features (including VV and VH) were constructed and analyzed. Using a random forest algorithm to assess feature importance, NDVI+NDRE and VV+VH were identified as the optimal feature combinations. Subsequently, 16 scenes of optical imagery (Sentinel-2) and 30 scenes of radar imagery (Sentinel-1) were fused at the feature level to generate a multimodal temporal feature image with 46 channels. Using Cropland Data Layer (CDL) samples as reference data, a U-Net deep neural network was employed for refined crop classification and compared with single-modal results. Experimental results demonstrated that the fusion model outperforms single-modal approaches in classification accuracy, boundary delineation, and consistency, achieving training, validation, and test accuracies of 95.83%, 91.99%, and 90.81%, respectively. Furthermore, consistent improvements were observed across evaluation metrics, including F1-score, precision, and recall.
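The feature-construction step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the band inputs, the epsilon guard against division by zero, and the toy array sizes are assumptions, and the abstract does not fully specify how the per-scene features combine into exactly 46 channels, so here each of the 16 optical and 30 SAR scenes contributes one channel (16 + 30 = 46).

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero
    return (nir - red) / (nir + red + 1e-10)

def ndre(nir, red_edge):
    # NDRE = (NIR - RedEdge) / (NIR + RedEdge), using a red-edge band
    return (nir - red_edge) / (nir + red_edge + 1e-10)

def stack_temporal_features(optical_scenes, sar_scenes):
    """Feature-level fusion: stack per-scene feature maps along the channel axis.

    optical_scenes: list of (H, W) arrays, one feature map per optical date
    sar_scenes:     list of (H, W) arrays, one backscatter map per SAR date
    Returns an (H, W, C) multimodal temporal feature image.
    """
    return np.stack(list(optical_scenes) + list(sar_scenes), axis=-1)

# Toy example: 16 optical dates + 30 SAR dates -> 46-channel image
h, w = 4, 4
optical = [np.random.rand(h, w) for _ in range(16)]
sar = [np.random.rand(h, w) for _ in range(30)]
cube = stack_temporal_features(optical, sar)
print(cube.shape)  # (4, 4, 46)
```

In practice the scene stack would be built from co-registered, cloud-masked Sentinel-2 indices and calibrated Sentinel-1 backscatter before being fed to the U-Net.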

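The random-forest feature-importance ranking mentioned in the abstract can likewise be sketched with scikit-learn. Everything below is synthetic and illustrative: the feature columns, sample count, and label rule are assumptions standing in for the per-pixel temporal features (NDVI, NDRE, VV, VH, etc.) the study actually ranked.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel training samples: columns are candidate features.
rng = np.random.default_rng(0)
n = 500
ndvi_feat = rng.normal(0.6, 0.1, n)   # stand-in for an NDVI feature
ndre_feat = rng.normal(0.4, 0.1, n)   # stand-in for an NDRE feature
noise = rng.normal(0.0, 1.0, n)       # uninformative feature

# Synthetic labels depend on the vegetation-index features, not the noise
y = (ndvi_feat + ndre_feat > 1.0).astype(int)
X = np.column_stack([ndvi_feat, ndre_feat, noise])

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = rf.feature_importances_
# Informative features should rank above the noise column
print(importances)
```

Ranking features this way and keeping the top combinations (NDVI+NDRE for optical, VV+VH for SAR) is how the abstract describes selecting the channels that enter the fused temporal image.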
