
Deep and shallow feature fusion framework for remote sensing open pit coal mine scene recognition.

Author information

Liu Yang, Zhang Jin

Affiliation

School of Mining Engineering, Taiyuan University of Technology, Shanxi, Taiyuan, China.

Publication information

Sci Rep. 2024 Oct 15;14(1):24124. doi: 10.1038/s41598-024-72855-5.

Abstract

Understanding land use and damage in open-pit coal mining areas is crucial for effective scientific oversight and management. Current recognition methods have limitations: traditional approaches depend on manually designed features with limited expressiveness, whereas deep learning techniques rely heavily on sample data. To overcome these limitations, this study proposes a three-branch feature extraction framework that effectively fuses deep features (DF) and shallow features (SF) and accomplishes scene recognition with high accuracy from fewer samples. Deep features are enhanced by a neighbouring-feature attention module and a Graph Convolutional Network (GCN) module, which capture neighbouring features and the correlations between local scene information, respectively. Shallow features are extracted with the Gray-Level Co-occurrence Matrix (GLCM) and Gabor filters, which capture local and overall texture variations, respectively. On the AID and RSSCN7 datasets, the proposed deep feature extraction model achieved classification accuracies of 97.53% and 96.73%, respectively, indicating superior performance in deep feature extraction. Finally, the two kinds of features were fused and fed into a support vector machine optimised by particle swarm optimisation (PSO-SVM) to classify remote sensing image scenes; the classification accuracy reached 92.78%, outperforming four other classification methods.
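The shallow branch described above combines GLCM statistics (local texture) with Gabor filter responses (overall texture). Below is a minimal, self-contained sketch of such a shallow-feature extractor; the quantisation level, filter frequency, orientations, and which GLCM statistics to keep are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import convolve

def glcm_features(img, levels=8):
    """GLCM statistics for horizontal neighbour pairs (assumed configuration)."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int)  # quantize gray levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count co-occurrences
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return [contrast, energy, homogeneity]

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a Gabor filter: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # carrier axis rotated by theta
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * frequency * xr)

def shallow_features(img):
    """Concatenate GLCM statistics with Gabor response statistics."""
    feats = glcm_features(img)
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # four orientations
        resp = convolve(img.astype(np.float64), gabor_kernel(0.25, theta))
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image patch
print(shallow_features(patch).shape)  # 3 GLCM + 8 Gabor statistics -> (11,)
```

In practice these shallow descriptors would be concatenated with the deep-branch features before classification, as the abstract describes.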

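The final classification stage uses an SVM whose hyperparameters are tuned by particle swarm optimisation (PSO-SVM). The sketch below illustrates the idea on synthetic stand-in features; the swarm size, inertia and acceleration coefficients, search ranges, and cross-validation fitness are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Toy stand-in for the fused deep + shallow feature vectors
X, y = make_classification(n_samples=120, n_features=12, n_informative=6,
                           random_state=0)

def fitness(params):
    """Cross-validated accuracy of an SVC; params are log10(C), log10(gamma)."""
    C, gamma = 10.0 ** params  # search in log-space for numerical stability
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, n_iter = 8, 15
pos = rng.uniform(-2, 2, size=(n_particles, 2))  # particle positions
vel = np.zeros_like(pos)                         # particle velocities
pbest = pos.copy()                               # per-particle best positions
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()         # swarm-wide best position

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(C), log10(gamma):", gbest, "cv accuracy:", pbest_val.max())
```

PSO avoids the grid over (C, gamma) that exhaustive search would need, which matters when each fitness evaluation is a full cross-validated SVM fit.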

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/1acf4199aad9/41598_2024_72855_Fig1_HTML.jpg
