

A Convolutional Neural Network with Spatial Location Integration for Nearshore Water Depth Inversion.

Authors

He Chunlong, Jiang Qigang, Tao Guofang, Zhang Zhenchao

Affiliation

College of Geoexploration Science and Technology, Jilin University, Changchun 130026, China.

Publication

Sensors (Basel). 2023 Oct 16;23(20):8493. doi: 10.3390/s23208493.

DOI: 10.3390/s23208493
PMID: 37896586
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10610799/
Abstract

Nearshore water depth plays a crucial role in scientific research, navigation management, coastal zone protection, and coastal disaster mitigation. This study aims to address the challenge of insufficient feature extraction from remote sensing data in nearshore water depth inversion. To achieve this, a convolutional neural network with spatial location integration (CNN-SLI) is proposed. The CNN-SLI is designed to extract deep features from remote sensing data by considering the spatial dimension. In this approach, the spatial location information of pixels is utilized as two additional channels, which are concatenated with the input feature image. The resulting concatenated image data are then used as the input for the convolutional neural network. Using GF-6 remote sensing images and measured water depth data from electronic nautical charts, a nearshore water depth inversion experiment was conducted in the waters near Nanshan Port. The results of the proposed method were compared with those of the Lyzenga, MLP, and CNN models. The CNN-SLI model demonstrated outstanding performance in water depth inversion, with impressive metrics: an RMSE of 1.34 m, MAE of 0.94 m, and R of 0.97. It outperformed all other models in terms of overall inversion accuracy and regression fit. Regardless of the water depth intervals, CNN-SLI consistently achieved the lowest RMSE and MAE values, indicating excellent performance in both shallow and deep waters. Comparative analysis with Kriging confirmed that the CNN-SLI model best matched the interpolated water depth, further establishing its superiority over the Lyzenga, MLP, and CNN models. Notably, in this study area, the CNN-SLI model exhibited significant performance advantages when trained with at least 250 samples, resulting in optimal inversion results. Accuracy evaluation on an independent dataset shows that the CNN-SLI model has better generalization ability than the Lyzenga, MLP, and CNN models under different conditions. 
These results demonstrate the superiority of CNN-SLI for nearshore water depth inversion and highlight the importance of integrating spatial location information into convolutional neural networks for improved performance.
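The core idea in the abstract — encoding each pixel's spatial location as two extra channels concatenated with the input feature image before it enters the CNN — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the normalization of coordinates to [0, 1] and the `(H, W, C)` channel-last layout are choices made here for clarity; the authors may use projected map coordinates and a different tensor layout.

```python
import numpy as np

def add_location_channels(bands: np.ndarray) -> np.ndarray:
    """Append two spatial-location channels (normalized row and column
    coordinates) to an (H, W, C) stack of spectral bands, mimicking the
    CNN-SLI input construction described in the abstract."""
    h, w, _ = bands.shape
    rows = np.linspace(0.0, 1.0, h).reshape(h, 1)       # normalized y per row
    cols = np.linspace(0.0, 1.0, w).reshape(1, w)       # normalized x per column
    row_ch = np.broadcast_to(rows, (h, w))[..., None]   # (H, W, 1)
    col_ch = np.broadcast_to(cols, (h, w))[..., None]   # (H, W, 1)
    # Concatenate location channels after the spectral bands: (H, W, C + 2)
    return np.concatenate([bands, row_ch, col_ch], axis=-1)

def rmse_mae(pred: np.ndarray, obs: np.ndarray) -> tuple:
    """Accuracy metrics reported in the abstract (RMSE and MAE, in metres)."""
    err = pred - obs
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```

For a GF-6 patch with four spectral bands, `add_location_channels` turns an `(H, W, 4)` array into an `(H, W, 6)` array, which then serves as the CNN input.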


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/ec1b9680fba1/sensors-23-08493-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/825ff9d4d9f1/sensors-23-08493-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/a39c0e6fc5e5/sensors-23-08493-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/427cec57a37d/sensors-23-08493-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/7a5be4dd687c/sensors-23-08493-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/a236a78e31c8/sensors-23-08493-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/cc87e95e68a6/sensors-23-08493-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/87f0c35b79d9/sensors-23-08493-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/e8c11c89dfa2/sensors-23-08493-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/804a9568c191/sensors-23-08493-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/7c3170636a4d/sensors-23-08493-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/f08a0eba7a57/sensors-23-08493-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/b69c3e195c35/sensors-23-08493-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/3b09ee74d3ac/sensors-23-08493-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ac2b/10610799/10fcbdb5817e/sensors-23-08493-g015.jpg

Similar Articles

1. A Convolutional Neural Network with Spatial Location Integration for Nearshore Water Depth Inversion.
Sensors (Basel). 2023 Oct 16;23(20):8493. doi: 10.3390/s23208493.
2. Application of mask R-CNN for building detection in UAV remote sensing images.
Heliyon. 2024 Sep 19;10(19):e38141. doi: 10.1016/j.heliyon.2024.e38141. eCollection 2024 Oct 15.
3. An Efficient Building Extraction Method from High Spatial Resolution Remote Sensing Images Based on Improved Mask R-CNN.
Sensors (Basel). 2020 Mar 6;20(5):1465. doi: 10.3390/s20051465.
4. Remote sensing image analysis and prediction based on improved Pix2Pix model for water environment protection of smart cities.
PeerJ Comput Sci. 2023 Apr 26;9:e1292. doi: 10.7717/peerj-cs.1292. eCollection 2023.
5. Research on Bathymetric Inversion Capability of Different Multispectral Remote Sensing Images in Seaports.
Sensors (Basel). 2023 Jan 19;23(3):1178. doi: 10.3390/s23031178.
6. Dual-Coupled CNN-GCN-Based Classification for Hyperspectral and LiDAR Data.
Sensors (Basel). 2022 Jul 31;22(15):5735. doi: 10.3390/s22155735.
7. Addressing Challenges in Port Depth Analysis: Integrating Machine Learning and Spatial Information for Accurate Remote Sensing of Turbid Waters.
Sensors (Basel). 2024 Jun 12;24(12):3802. doi: 10.3390/s24123802.
8. One-dimensional convolutional neural network and hybrid deep-learning paradigm for classification of specific language impaired children using their speech.
Comput Methods Programs Biomed. 2022 Jan;213:106487. doi: 10.1016/j.cmpb.2021.106487. Epub 2021 Oct 22.
9. A transfer learning-based CNN and LSTM hybrid deep learning model to classify motor imagery EEG signals.
Comput Biol Med. 2022 Apr;143:105288. doi: 10.1016/j.compbiomed.2022.105288. Epub 2022 Feb 10.
10. Using dual-channel CNN to classify hyperspectral image based on spatial-spectral information.
Math Biosci Eng. 2020 May 2;17(4):3450-3477. doi: 10.3934/mbe.2020195.

Cited By

1. Corrosion Risk Assessment in Coastal Environments Using Machine Learning-Based Predictive Models.
Sensors (Basel). 2025 Jul 7;25(13):4231. doi: 10.3390/s25134231.

References

1. Satellite-derived bathymetry based on machine learning models and an updated quasi-analytical algorithm approach.
Opt Express. 2022 May 9;30(10):16773-16793. doi: 10.1364/OE.456094.
2. A comparison of Landsat 8, RapidEye and Pleiades products for improving empirical predictions of satellite-derived bathymetry.
Remote Sens Environ. 2019 Nov;233:111414. doi: 10.1016/j.rse.2019.111414.
3. Deep learning in neural networks: an overview.
Neural Netw. 2015 Jan;61:85-117. doi: 10.1016/j.neunet.2014.09.003. Epub 2014 Oct 13.
4. Water depth mapping from passive remote sensing data under a generalized ratio assumption.
Appl Opt. 1983 Apr 15;22(8):1134-5. doi: 10.1364/AO.22.001134.