
Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images.

Authors

Jiang Yu, Zhao Jiasen, Luo Wei, Guo Bincheng, An Zhulin, Xu Yongjun

Affiliations

Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China.

University of Chinese Academy of Sciences, Beijing 100049, China.

Publication

Sensors (Basel). 2025 Jun 23;25(13):3915. doi: 10.3390/s25133915.

DOI:10.3390/s25133915
PMID:40648174
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12251830/
Abstract

The technology of road extraction serves as a crucial foundation for urban intelligent renewal and green sustainable development. Its outcomes can optimize transportation network planning, reduce resource waste, and enhance urban resilience. Deep learning-based approaches have demonstrated outstanding performance in road extraction, particularly excelling in complex scenarios. However, extracting roads from remote sensing data remains challenging due to several factors that limit accuracy: (1) Roads often share similar visual features with the background, such as rooftops and parking lots, leading to ambiguous inter-class distinctions; (2) Roads in complex environments, such as those occluded by shadows or trees, are difficult to detect. To address these issues, this paper proposes an improved model based on Graph Convolutional Networks (GCNs), named FR-SGCN (Hierarchical Depth-wise Separable Graph Convolutional Network Incorporating Graph Reasoning and Attention Mechanisms). The model is designed to enhance the precision and robustness of road extraction through intelligent techniques, thereby supporting precise planning of green infrastructure. First, high-dimensional features are extracted using ResNeXt, whose grouped convolution structure balances parameter efficiency and feature representation capability, significantly enhancing the expressiveness of the data. These high-dimensional features are then segmented, and enhanced channel and spatial features are obtained via attention mechanisms, effectively mitigating background interference and intra-class ambiguity. Subsequently, a hybrid adjacency matrix construction method is proposed, based on gradient operators and graph reasoning. This method integrates similarity and gradient information and employs graph convolution to capture the global contextual relationships among features. 
To validate the effectiveness of FR-SGCN, we conducted comparative experiments using 12 different methods on both a self-built dataset and a public dataset. The proposed model achieved the highest F1 score on both datasets. Visualization results from the experiments demonstrate that the model effectively extracts occluded roads and reduces the risk of redundant construction caused by data errors during urban renewal. This provides reliable technical support for smart cities and sustainable development.
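The abstract's core mechanism — building a hybrid adjacency matrix from feature similarity plus gradient information, then propagating features with a graph convolution — can be sketched as follows. This is an illustrative NumPy sketch of the general idea only, not the paper's FR-SGCN implementation: the function names, thresholds, and the `alpha` mixing coefficient are all assumptions for demonstration.

```python
import numpy as np

def cosine_similarity_adj(x, threshold=0.8):
    """Adjacency from pairwise cosine similarity of node features x (n, d)."""
    norm = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    sim = norm @ norm.T
    return (sim > threshold).astype(float)

def gradient_adj(grad, threshold=0.1):
    """Adjacency linking nodes whose (Sobel-style) gradient magnitudes agree."""
    diff = np.abs(grad[:, None] - grad[None, :])
    return (diff < threshold).astype(float)

def gcn_layer(x, a, w):
    """One propagation step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    a_hat = a + np.eye(a.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ x @ w, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))                     # 6 graph nodes, 4-dim features
grad = rng.uniform(size=6)                      # per-node gradient magnitude (assumed input)
alpha = 0.5                                     # assumed similarity/gradient mixing weight
a = np.clip(alpha * cosine_similarity_adj(x)
            + (1 - alpha) * gradient_adj(grad), 0.0, 1.0)
w = rng.normal(size=(4, 2))                     # learnable weights (random here)
out = gcn_layer(x, a, w)
print(out.shape)                                # (6, 2)
```

Because the adjacency mixes both edge types, a node occluded in appearance (low similarity to visible road) can still receive messages through gradient-consistent neighbors — the intuition behind recovering shadow- or tree-occluded roads via global context.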


Figures (g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/026022169503/sensors-25-03915-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/9c6de64bfdb5/sensors-25-03915-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/6556e9a5b1b6/sensors-25-03915-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/bdcc2c48efdc/sensors-25-03915-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/0eacbeffe28f/sensors-25-03915-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/c77879a07d08/sensors-25-03915-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/f1b355f66d21/sensors-25-03915-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/476aa0403809/sensors-25-03915-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/876781e825d6/sensors-25-03915-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/e9dde10f78a2/sensors-25-03915-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/9fefd0bbd750/sensors-25-03915-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/10126b238d2b/sensors-25-03915-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a5fc/12251830/e1fbe15bb3ac/sensors-25-03915-g013.jpg

Similar Articles

1. Utilizing GCN-Based Deep Learning for Road Extraction from Remote Sensing Images.
Sensors (Basel). 2025 Jun 23;25(13):3915. doi: 10.3390/s25133915.
2. Short-Term Memory Impairment
3. Sexual Harassment and Prevention Training
4. Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices.
Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.
5. Spatio-temporal transformer and graph convolutional networks based traffic flow prediction.
Sci Rep. 2025 Jul 7;15(1):24299. doi: 10.1038/s41598-025-10287-5.
6. Management of urinary stones by experts in stone disease (ESD 2025).
Arch Ital Urol Androl. 2025 Jun 30;97(2):14085. doi: 10.4081/aiua.2025.14085.
7. A New Measure of Quantified Social Health Is Associated With Levels of Discomfort, Capability, and Mental and General Health Among Patients Seeking Musculoskeletal Specialty Care.
Clin Orthop Relat Res. 2025 Apr 1;483(4):647-663. doi: 10.1097/CORR.0000000000003394. Epub 2025 Feb 5.
8. Insights into gait performance in Parkinson's disease via latent features of deep graph neural networks.
Front Neurol. 2025 Jun 20;16:1567344. doi: 10.3389/fneur.2025.1567344. eCollection 2025.
9. Systemic pharmacological treatments for chronic plaque psoriasis: a network meta-analysis.
Cochrane Database Syst Rev. 2021 Apr 19;4(4):CD011535. doi: 10.1002/14651858.CD011535.pub4.
10. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods.
Cochrane Database Syst Rev. 2015 Jul 27;2015(7):MR000042. doi: 10.1002/14651858.MR000042.pub2.

References Cited in This Article

1. CoANet: Connectivity Attention Network for Road Extraction From Satellite Imagery.
IEEE Trans Image Process. 2021;30:8540-8552. doi: 10.1109/TIP.2021.3117076. Epub 2021 Oct 13.
2. Deep Feature Aggregation Framework Driven by Graph Convolutional Network for Scene Classification in Remote Sensing.
IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5751-5765. doi: 10.1109/TNNLS.2021.3071369. Epub 2022 Oct 5.
3. A Cross Entropy Based Deep Neural Network Model for Road Extraction from Satellite Images.
Entropy (Basel). 2020 May 9;22(5):535. doi: 10.3390/e22050535.
4. Building segmentation through a gated graph convolutional neural network with deep structured feature embedding.
ISPRS J Photogramm Remote Sens. 2020 Jan;159:184-197. doi: 10.1016/j.isprsjprs.2019.11.004.