
Automatic Extraction of Water and Shadow from SAR Images Based on a Multi-Resolution Dense Encoder and Decoder Network.

Author Information

Zhang Peng, Chen Lifu, Li Zhenhong, Xing Jin, Xing Xuemin, Yuan Zhihui

Affiliations

School of Electrical and Information Engineering, Changsha University of Science & Technology, Changsha 410114, China.

Laboratory of Radar Remote Sensing Applications, Changsha University of Science & Technology, Changsha 410014, China.

Publication Information

Sensors (Basel). 2019 Aug 16;19(16):3576. doi: 10.3390/s19163576.

Abstract

Water and shadow areas in SAR images contain rich information for various applications, but at present they cannot be extracted automatically and precisely. To address this problem, a new framework called the Multi-Resolution Dense Encoder and Decoder (MRDED) network is proposed, which integrates a Convolutional Neural Network (CNN), Residual Network (ResNet), Dense Convolutional Network (DenseNet), Global Convolutional Network (GCN), and Convolutional Long Short-Term Memory (ConvLSTM). MRDED comprises three parts: the Gray Level Gradient Co-occurrence Matrix (GLGCM), the Encoder network, and the Decoder network. The GLGCM extracts low-level features, which are further processed by the Encoder. The Encoder network employs ResNet to extract features at different resolutions. The Decoder network has two components: Multi-level Features Extraction and Fusion (MFEF) and Score maps Fusion (SF). We implement two versions of MFEF, named MFEF1 and MFEF2, which generate separate score maps. They differ in that MFEF2 uses the Chained Residual Pooling (CRP) module, while MFEF1 replaces it with ConvLSTM to form the Improved Chained Residual Pooling (ICRP) module. The two score maps generated by MFEF1 and MFEF2 are fused with different weights to produce a fused score map, which is then passed through the Softmax function to generate the final extraction results for water and shadow areas. To evaluate the proposed framework, MRDED is trained and tested on large SAR images. To further assess classification performance, a total of eight different classification frameworks are compared with the proposed one. MRDED outperforms them all, reaching 80.12% Pixel Accuracy (PA) and 73.88% Intersection over Union (IoU) for water, 88.00% PA and 77.11% IoU for shadow, and 95.16% PA and 90.49% IoU for the background class.
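The SF stage described in the abstract (weighted fusion of the two branch score maps followed by a per-pixel Softmax and class assignment) can be sketched as below. This is a minimal illustration, not the paper's implementation: the fusion weights, the class ordering (background, water, shadow), and the array shapes are assumptions made for the example.

```python
import numpy as np

def fuse_score_maps(score1, score2, w1=0.5, w2=0.5):
    """Fuse two per-class score maps of shape (H, W, C) with scalar
    weights, apply a per-pixel softmax over the class axis, and return
    the most probable class label for each pixel."""
    fused = w1 * score1 + w2 * score2
    # Numerically stable softmax over the class dimension.
    shifted = fused - fused.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=-1, keepdims=True)
    # Assumed label order: 0 = background, 1 = water, 2 = shadow.
    return probs.argmax(axis=-1)

# Two toy 2x2 score maps with 3 classes (hypothetical branch outputs).
s1 = np.zeros((2, 2, 3)); s1[..., 1] = 2.0   # branch 1 favours water
s2 = np.zeros((2, 2, 3)); s2[..., 2] = 1.0   # branch 2 favours shadow
labels = fuse_score_maps(s1, s2, w1=0.7, w2=0.3)
# With these weights, 0.7 * 2.0 = 1.4 for water beats 0.3 * 1.0 = 0.3
# for shadow, so every pixel is labelled water (class 1).
```

Because the softmax is monotonic, the argmax could equally be taken on the fused scores directly; the softmax matters when calibrated per-class probabilities are needed rather than hard labels.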


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c0a3/6719083/ec135cecff55/sensors-19-03576-g001.jpg
