
Impact Load Localization Based on Multi-Scale Feature Fusion Convolutional Neural Network.

Authors

Wu Shiji, Huang Xiufeng, Xu Rongwu, Yu Wenjing, Cheng Guo

Affiliations

Laboratory of Vibration and Noise, Naval University of Engineering, Wuhan 430033, China.

National Key Laboratory of Vibration and Noise on Ship, Naval University of Engineering, Wuhan 430033, China.

Publication

Sensors (Basel). 2024 Sep 19;24(18):6060. doi: 10.3390/s24186060.

DOI: 10.3390/s24186060
PMID: 39338807
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11435916/
Abstract

In order to achieve impact load localization of complex structures such as ships, this paper proposes a multi-scale feature fusion convolutional neural network (MSFF-CNN) method for impact load localization. An end-to-end machine learning model is used, where the raw vibration signals of impact loads are directly fed into the network model to avoid the process of feature extraction. Automatic feature learning and feature concatenation of the signal are achieved through four independent convolutional layers, each using a different size of convolutional kernel. Data normalization and L2 regularization techniques are introduced to enhance the data and prevent overfitting. Classification and localization of impact loads are accomplished using a softmax classification layer. Validation experiments are carried out using a ship's stern compartment model. Our results show that the classification and localization accuracy of the impact load sample group of MSFF-CNN reaches 94.29% compared with a traditional CNN. The method further improves the ability of the network to extract state features, takes local perception and global vision into account, effectively improves the classification ability of the model, and has good prospects for engineering applications.
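The pipeline the abstract describes — raw vibration signal fed into several parallel convolution branches with different kernel sizes, features fused by concatenation, then a softmax layer over impact locations — can be sketched minimally as below. The kernel sizes, class count, and random (untrained) weights are illustrative placeholders, not the paper's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    # 'valid' cross-correlation of signal x with kernel w
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def msff_forward(signal, kernel_sizes=(3, 7, 15, 31), n_classes=10):
    # One branch per kernel size: conv -> ReLU -> global average pooling.
    # Different kernel sizes trade local detail against global context,
    # which is the "multi-scale" idea in the paper.
    feats = []
    for k in kernel_sizes:
        w = rng.standard_normal(k) / np.sqrt(k)      # placeholder kernel
        feats.append(np.maximum(conv1d(signal, w), 0).mean())
    fused = np.array(feats)                          # feature concatenation
    W = rng.standard_normal((n_classes, len(fused))) # placeholder classifier weights
    return softmax(W @ fused)                        # probability per impact location

probs = msff_forward(rng.standard_normal(1024))
```

In the actual model each branch would learn its kernels from the impact-load training data; this sketch only shows how the multi-scale branches produce a fused feature vector that a softmax layer maps to location classes.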


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/6871ec560a90/sensors-24-06060-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/4b2d4ecc5f28/sensors-24-06060-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/e5f8f1e11204/sensors-24-06060-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/d1a00492f250/sensors-24-06060-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/94b414cbb2ea/sensors-24-06060-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/1f1866e109cc/sensors-24-06060-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/f6820986ec76/sensors-24-06060-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/fab97ce9923f/sensors-24-06060-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/2db390f063db/sensors-24-06060-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/80289d3edfbc/sensors-24-06060-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/cf60009b0540/sensors-24-06060-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/f5e2acffbe82/sensors-24-06060-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/6160da4f4bc7/sensors-24-06060-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/9a369452799e/sensors-24-06060-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/adca125004c3/sensors-24-06060-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/5d0ff2f1afe8/sensors-24-06060-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/21babf56ea4a/sensors-24-06060-g017a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/cdffc6c28850/sensors-24-06060-g018a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/796beeff8b0c/sensors-24-06060-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/cf4d0634aa56/sensors-24-06060-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/038f/11435916/fb59a0fd006a/sensors-24-06060-g021.jpg

Similar Articles

1. Impact Load Localization Based on Multi-Scale Feature Fusion Convolutional Neural Network.
Sensors (Basel). 2024 Sep 19;24(18):6060. doi: 10.3390/s24186060.
2. DM-CNN: Dynamic Multi-scale Convolutional Neural Network with uncertainty quantification for medical image classification.
Comput Biol Med. 2024 Jan;168:107758. doi: 10.1016/j.compbiomed.2023.107758. Epub 2023 Nov 29.
3. Effect of dual-convolutional neural network model fusion for Aluminum profile surface defects classification and recognition.
Math Biosci Eng. 2022 Jan;19(1):997-1025. doi: 10.3934/mbe.2022046. Epub 2021 Nov 25.
4. A deep dive into understanding tumor foci classification using multiparametric MRI based on convolutional neural network.
Med Phys. 2020 Sep;47(9):4077-4086. doi: 10.1002/mp.14255. Epub 2020 Jun 12.
5. fMRI volume classification using a 3D convolutional neural network robust to shifted and scaled neuronal activations.
Neuroimage. 2020 Dec;223:117328. doi: 10.1016/j.neuroimage.2020.117328. Epub 2020 Sep 5.
6. A multi-information fusion anomaly detection model based on convolutional neural networks and AutoEncoder.
Sci Rep. 2024 Jul 12;14(1):16147. doi: 10.1038/s41598-024-66760-0.
7. Electromagnetic Modulation Signal Classification Using Dual-Modal Feature Fusion CNN.
Entropy (Basel). 2022 May 15;24(5):700. doi: 10.3390/e24050700.
8. Multi-person feature fusion transfer learning-based convolutional neural network for SSVEP-based collaborative BCI.
Front Neurosci. 2022 Jul 26;16:971039. doi: 10.3389/fnins.2022.971039. eCollection 2022.
9. A Novel Deep Learning Method for Intelligent Fault Diagnosis of Rotating Machinery Based on Improved CNN-SVM and Multichannel Data Fusion.
Sensors (Basel). 2019 Apr 9;19(7):1693. doi: 10.3390/s19071693.
10. A Novel Bilinear Feature and Multi-Layer Fused Convolutional Neural Network for Tactile Shape Recognition.
Sensors (Basel). 2020 Oct 15;20(20):5822. doi: 10.3390/s20205822.

Cited By

1. New Method of Impact Localization on Plate-like Structures Using Deep Learning and Wavelet Transform.
Sensors (Basel). 2025 Mar 20;25(6):1926. doi: 10.3390/s25061926.
2. Low-Quality Sensor Data-Based Semi-Supervised Learning for Medical Image Segmentation.
Sensors (Basel). 2024 Dec 5;24(23):7799. doi: 10.3390/s24237799.

References

1. Towards End-to-End Acoustic Localization Using Deep Learning: From Audio Signals to Source Position Coordinates.
Sensors (Basel). 2018 Oct 12;18(10):3418. doi: 10.3390/s18103418.
2. Localizing two acoustic emission sources simultaneously using beamforming and singular value decomposition.
Ultrasonics. 2018 Apr;85:3-22. doi: 10.1016/j.ultras.2017.10.019. Epub 2017 Oct 25.
3. A novel acoustic emission beamforming method with two uniform linear arrays on plate-like structures.
Ultrasonics. 2014 Feb;54(2):737-45. doi: 10.1016/j.ultras.2013.09.020. Epub 2013 Oct 2.