
Prediction and Estimation of River Velocity Based on GAN and Multifeature Fusion

Affiliations

School of Artificial Intelligence, Xi'an Aeronautical Polytechnic Institute, Xi'an 710089, China.

School of Information Engineering, Chang'an University, Xi'an 710064, China.

Publication

Comput Intell Neurosci. 2022 Aug 21;2022:7316133. doi: 10.1155/2022/7316133. eCollection 2022.

DOI: 10.1155/2022/7316133
PMID: 36045976
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9420598/
Abstract

The need to predict and estimate river velocity motivates a prediction method based on GAN image enhancement and multifeature fusion. To improve the quality of river-velocity images, a GAN is used to enhance them, which improves the completeness of the image data set. To improve prediction accuracy, multiple features are extracted from the images and fused, and the fused features serve as the input of a convolutional neural network (CNN). The results show that at velocities of 0.25 m/s, 0.50 m/s, and 0.75 m/s, the accuracy of the improved method reaches 85%, 90%, and 92%, respectively, higher than the SVM, VGG-16, and BPNET algorithms. These results indicate that the improvement has practical application value.
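The pipeline the abstract describes (GAN-enhanced frames → multifeature extraction → fusion → CNN input) can be sketched in outline. The paper's actual feature set is not specified here, so the intensity-histogram and gradient descriptors below are hypothetical stand-ins; in the real method, the fused vector would feed the CNN classifier over the three velocity classes.

```python
import numpy as np

def extract_features(img):
    """Compute simple per-image descriptors (hypothetical stand-ins for the
    paper's unspecified features) from a GAN-enhanced river frame."""
    gy, gx = np.gradient(img.astype(float))
    grad_mag = np.hypot(gx, gy)
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0))
    return {
        "intensity": hist / img.size,                       # normalized histogram
        "gradient": np.array([grad_mag.mean(), grad_mag.std()]),
    }

def fuse_features(features):
    """Multifeature fusion by concatenation into one input vector."""
    return np.concatenate([v.ravel() for v in features.values()])

rng = np.random.default_rng(0)
img = rng.random((64, 64))            # stand-in for one enhanced frame
fused = fuse_features(extract_features(img))
print(fused.shape)                    # (10,) = 8 histogram bins + 2 gradient stats
```

Concatenation is the simplest fusion scheme; the paper may weight or transform the individual feature maps before they reach the CNN.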


(Figures 1–15 are available in the PMC full text.)

Similar articles

1
Prediction and Estimation of River Velocity Based on GAN and Multifeature Fusion.
Comput Intell Neurosci. 2022 Aug 21;2022:7316133. doi: 10.1155/2022/7316133. eCollection 2022.
2
A Hemolysis Image Detection Method Based on GAN-CNN-ELM.
Comput Math Methods Med. 2022 Feb 22;2022:1558607. doi: 10.1155/2022/1558607. eCollection 2022.
3
Embryo development stage prediction algorithm for automated time lapse incubators.
Comput Methods Programs Biomed. 2019 Aug;177:161-174. doi: 10.1016/j.cmpb.2019.05.027. Epub 2019 May 29.
4
Using convolutional neural network for predicting cyanobacteria concentrations in river water.
Water Res. 2020 Nov 1;186:116349. doi: 10.1016/j.watres.2020.116349. Epub 2020 Aug 26.
5
Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images.
Med Phys. 2019 Jun;46(6):2669-2682. doi: 10.1002/mp.13553. Epub 2019 May 6.
6
Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction.
Med Phys. 2021 Jul;48(7):3827-3841. doi: 10.1002/mp.14944. Epub 2021 Jun 16.
7
Enhancing classification of cells procured from bone marrow aspirate smears using generative adversarial networks and sequential convolutional neural network.
Comput Methods Programs Biomed. 2022 Sep;224:107019. doi: 10.1016/j.cmpb.2022.107019. Epub 2022 Jul 10.
8
Construction of Diagnosis Model of Moyamoya Disease Based on Convolution Neural Network Algorithm.
Comput Math Methods Med. 2022 Jul 25;2022:4007925. doi: 10.1155/2022/4007925. eCollection 2022.
9
Deep Convolution Neural Network for Malignancy Detection and Classification in Microscopic Uterine Cervix Cell Images.
Asian Pac J Cancer Prev. 2019 Nov 1;20(11):3447-3456. doi: 10.31557/APJCP.2019.20.11.3447.
10
Depth Estimation from Light Field Geometry Using Convolutional Neural Networks.
Sensors (Basel). 2021 Sep 10;21(18):6061. doi: 10.3390/s21186061.

Cited by

1
AI-driven predictions of geophysical river flows with vegetation.
Sci Rep. 2024 Jul 16;14(1):16368. doi: 10.1038/s41598-024-67269-2.

References

1
RGB-D salient object detection: A survey.
Comput Vis Media (Beijing). 2021;7(1):37-69. doi: 10.1007/s41095-020-0199-z. Epub 2021 Jan 7.
2
ProDCoNN: Protein design using a convolutional neural network.
Proteins. 2020 Jul;88(7):819-829. doi: 10.1002/prot.25868. Epub 2020 Jan 6.
3
Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening.
J Digit Imaging. 2020 Jun;33(3):619-631. doi: 10.1007/s10278-019-00269-1.
4
Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images.
Australas Phys Eng Sci Med. 2018 Jun;41(2):393-401. doi: 10.1007/s13246-018-0636-9. Epub 2018 Apr 13.
5
Unraveling flow patterns through nonlinear manifold learning.
PLoS One. 2014 Mar 10;9(3):e91131. doi: 10.1371/journal.pone.0091131. eCollection 2014.