

Deep and shallow feature fusion framework for remote sensing open pit coal mine scene recognition

Authors

Liu Yang, Zhang Jin

Affiliations

School of Mining Engineering, Taiyuan University of Technology, Taiyuan, Shanxi, China.

Publication

Sci Rep. 2024 Oct 15;14(1):24124. doi: 10.1038/s41598-024-72855-5.

DOI: 10.1038/s41598-024-72855-5
PMID: 39406759
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11480329/
Abstract

Understanding land use and damage in open-pit coal mining areas is crucial for effective scientific oversight and management. Current recognition methods have limitations: traditional approaches depend on manually designed features with limited expressiveness, whereas deep learning techniques rely heavily on sample data. To overcome these limitations, this study proposes a three-branch feature extraction framework that fuses deep features (DF) and shallow features (SF) and accomplishes scene recognition with high accuracy from fewer samples. Deep features are enhanced by a neighbouring-feature attention module and a Graph Convolutional Network (GCN) module, which capture neighbouring features and the correlations between local scene information, respectively. Shallow features are extracted with the Gray-Level Co-occurrence Matrix (GLCM) and Gabor filters, which capture local and overall texture variations, respectively. On the AID and RSSCN7 datasets, the proposed deep feature extraction model achieved classification accuracies of 97.53% and 96.73%, respectively, demonstrating superior performance in deep feature extraction. Finally, the two kinds of features were fused and fed into a support vector machine optimised by particle swarm optimisation (PSO-SVM) to classify remote sensing scenes; the classification accuracy reached 92.78%, outperforming four other classification methods.
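The shallow branch builds texture descriptors from the Gray-Level Co-occurrence Matrix. A minimal numpy sketch of GLCM construction and three common texture statistics follows; the paper's actual offsets, quantization levels, and choice of statistics are assumptions here, not taken from the article:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized symmetric GLCM for a non-negative pixel offset (dx, dy).

    Illustrative sketch only; offset and `levels` are assumed values,
    not the paper's configuration.
    """
    # Quantize gray values into `levels` bins.
    q = img.astype(np.int64) * levels // (int(img.max()) + 1)
    h, w = q.shape
    a, b = q[:h - dy, :w - dx], q[dy:, dx:]   # co-occurring pixel pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)   # count pair occurrences
    m += m.T                                   # make symmetric
    return m / m.sum()                         # normalize to probabilities

def glcm_features(p):
    """Contrast, energy, and homogeneity of a normalized GLCM `p`."""
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    return np.array([contrast, energy, homogeneity])
```

In the full pipeline these statistics would be concatenated with Gabor-filter responses to form the shallow feature vector.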

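The final classification step tunes an SVM with particle swarm optimisation (PSO-SVM). A minimal PSO sketch is shown below; the SVM cross-validation objective is replaced by a stand-in function, and the swarm size, inertia, and acceleration coefficients are assumed values, not the paper's settings:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization).

    In a PSO-SVM, `objective` would be the cross-validated SVM error as a
    function of hyperparameters such as (C, gamma); here it is any callable
    mapping a parameter vector to a scalar.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))  # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive pull toward pbest + social pull toward gbest.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better] = x[better]
        pbest_f[better] = f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

For example, minimizing a quadratic stand-in objective over a 2-D search box drives the swarm toward its optimum, just as a real PSO-SVM would drive (C, gamma) toward the lowest validation error.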

Figures (Fig. 1–19)

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/1acf4199aad9/41598_2024_72855_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/4edb08c12376/41598_2024_72855_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/130009d8846b/41598_2024_72855_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/86ffcb38884c/41598_2024_72855_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/90db81714678/41598_2024_72855_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/a4e4eb58abf3/41598_2024_72855_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/016fc9ff7ef6/41598_2024_72855_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/7f04110633c1/41598_2024_72855_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/84f34a748324/41598_2024_72855_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/f7791250f9bf/41598_2024_72855_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/346d56a480a9/41598_2024_72855_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/2fba391403d8/41598_2024_72855_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/6ff70c2339c0/41598_2024_72855_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/55991f0795e9/41598_2024_72855_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/fc329c1929cf/41598_2024_72855_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/d40b05e0b2a7/41598_2024_72855_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/237f30730414/41598_2024_72855_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/19c7703ad327/41598_2024_72855_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/78e6/11480329/28ea36edfc99/41598_2024_72855_Fig19_HTML.jpg

Similar Articles

1. Deep and shallow feature fusion framework for remote sensing open pit coal mine scene recognition. Sci Rep. 2024 Oct 15;14(1):24124. doi: 10.1038/s41598-024-72855-5.
2. Deep Feature Aggregation Framework Driven by Graph Convolutional Network for Scene Classification in Remote Sensing. IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5751-5765. doi: 10.1109/TNNLS.2021.3071369. Epub 2022 Oct 5.
3. Deep Learning for Feature Extraction in Remote Sensing: A Case-Study of Aerial Scene Classification. Sensors (Basel). 2020 Jul 14;20(14):3906. doi: 10.3390/s20143906.
4. Remote Sensing Image Classification Based on Canny Operator Enhanced Edge Features. Sensors (Basel). 2024 Jun 17;24(12):3912. doi: 10.3390/s24123912.
5. An Efficient and Lightweight Convolutional Neural Network for Remote Sensing Image Scene Classification. Sensors (Basel). 2020 Apr 2;20(7):1999. doi: 10.3390/s20071999.
6. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification. Comput Intell Neurosci. 2018 Jan 18;2018:8639367. doi: 10.1155/2018/8639367. eCollection 2018.
7. Improving remote sensing scene classification using dung beetle optimization with enhanced deep learning approach. Heliyon. 2024 Aug 30;10(18):e37154. doi: 10.1016/j.heliyon.2024.e37154. eCollection 2024 Sep 30.
8. A Study for Texture Feature Extraction of High-Resolution Satellite Images Based on a Direction Measure and Gray Level Co-Occurrence Matrix Fusion Algorithm. Sensors (Basel). 2017 Jun 22;17(7):1474. doi: 10.3390/s17071474.
9. SAGN: Semantic-Aware Graph Network for Remote Sensing Scene Classification. IEEE Trans Image Process. 2023;32:1011-1025. doi: 10.1109/TIP.2023.3238310. Epub 2023 Jan 31.
10. Dual-Coupled CNN-GCN-Based Classification for Hyperspectral and LiDAR Data. Sensors (Basel). 2022 Jul 31;22(15):5735. doi: 10.3390/s22155735.

Cited By

1. Privacy-preserving federated learning for collaborative medical data mining in multi-institutional settings. Sci Rep. 2025 Apr 11;15(1):12482. doi: 10.1038/s41598-025-97565-4.

References

1. Landslide risk evaluation method of open-pit mine based on numerical simulation of large deformation of landslide. Sci Rep. 2023 Sep 16;13(1):15410. doi: 10.1038/s41598-023-42736-4.
2. Gray level co-occurrence matrix (GLCM) texture based crop classification using low altitude remote sensing platforms. PeerJ Comput Sci. 2021 May 19;7:e536. doi: 10.7717/peerj-cs.536. eCollection 2021.