
Research on CTSA-DeepLabV3+ Urban Green Space Classification Model Based on GF-2 Images

Authors

Li Ruotong, Zhao Jian, Fan Yanguo

Affiliation

School of Oceanography and Spatial Information, China University of Petroleum, Qingdao 266580, China.

Publication

Sensors (Basel). 2025 Jun 21;25(13):3862. doi: 10.3390/s25133862.

Abstract

As an important part of urban ecosystems, urban green spaces play a key role in ecological environmental protection and urban spatial structure optimization. However, owing to the complex morphology and high fragmentation of urban green spaces, effectively distinguishing urban green space types in high-spatial-resolution images remains challenging. To address this problem, a Contextual Transformer and Squeeze Aggregated Excitation Enhanced DeepLabV3+ (CTSA-DeepLabV3+) model was proposed for urban green space classification based on Gaofen-2 (GF-2) satellite images. A Contextual Transformer (CoT) module was added to the decoder of the model to enhance its global context modeling capability, and the SENetv2 attention mechanism was employed to improve its capture of key features. The experimental results showed that the overall classification accuracy of the CTSA-DeepLabV3+ model was 96.21%, and the mean intersection over union, precision, recall, and F1-score reached 89.22%, 92.56%, 90.12%, and 91.23%, respectively, outperforming DeepLabV3+, Fully Convolutional Networks (FCNs), U-Net, the Pyramid Scene Parsing Network (PSPNet), UperNet-Swin Transformer, and other mainstream models. The model achieves higher accuracy and provides an efficient reference for the intelligent interpretation of urban green space from high-resolution remote sensing images.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f439/12252402/72bcb28e0fc1/sensors-25-03862-g001.jpg
