Suppr 超能文献


Underwater Target Recognition Method Based on Singular Spectrum Analysis and Channel Attention Convolutional Neural Network

Authors

Ji Fang, Lu Shaoqing, Ni Junshuai, Li Ziming, Feng Weijia

Affiliation

China Ship Research and Development Academy, Beijing 100101, China.

Publication

Sensors (Basel). 2025 Apr 18;25(8):2573. doi: 10.3390/s25082573.

DOI: 10.3390/s25082573
PMID: 40285261
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12031622/
Abstract

In order to improve the efficiency of the deep network model in processing the radiated noise signals of underwater acoustic targets, this paper introduces a Singular Spectrum Analysis and Channel Attention Convolutional Neural Network (SSA-CACNN) model. The front end of the model is designed as an SSA filter, and its input is the time-domain signal that has undergone simple preprocessing. The SSA method is utilized to separate the noise efficiently and reliably from useful signals. The first three orders of useful signals are then fed into the CACNN model, which has a convolutional layer set up at the beginning of the model to further remove noise from the signal. Then, the attention of the model to the feature signal channels is enhanced through the combination of multiple groups of convolutional operations and the channel attention mechanism, which facilitates the model's ability to discern the essential characteristics of the underwater acoustic signals and improve the target recognition rate. Experimental Results: The signal reconstructed by the first three-order waveforms at the front end of the SSA-CACNN model proposed in this paper can retain most of the features of the target. In the experimental verification using the ShipsEar dataset, the model achieved a recognition accuracy of 98.64%. The model's parameter count of 0.26 M was notably lower than that of other comparable deep models, indicating a more efficient use of resources. Additionally, the SSA-CACNN model had a certain degree of robustness to noise, with a correct recognition rate of 84.61% maintained when the signal-to-noise ratio (SNR) was -10 dB. Finally, the pre-trained SSA-CACNN model on the ShipsEar dataset was transferred to the DeepShip dataset with a recognition accuracy of 94.98%.
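The SSA front end described above embeds the time-domain signal into a trajectory matrix, decomposes it, and reconstructs the signal from the first three components. A minimal numpy sketch of that idea (not the paper's implementation; the window length, component count, and test signal below are illustrative assumptions):

```python
import numpy as np

def ssa_reconstruct(x, window, n_components=3):
    """Singular Spectrum Analysis: embed a 1-D signal into a trajectory
    (Hankel) matrix, take its SVD, and reconstruct the signal from the
    leading components via diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Each column of the trajectory matrix is a lagged window of x.
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    # Rank-n_components approximation of the trajectory matrix.
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging (Hankelization) maps back to a length-n signal.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            recon[i + j] += approx[i, j]
            counts[i + j] += 1
    return recon / counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 500)
    clean = np.sin(2 * np.pi * 5 * t)        # stand-in for a target tone
    noisy = clean + 0.5 * rng.standard_normal(500)
    denoised = ssa_reconstruct(noisy, window=50, n_components=3)
    # The leading components should track the tone better than the raw signal.
    print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Retaining only the first few singular components acts as the noise filter: a narrowband target signature concentrates in the leading components, while broadband noise spreads across the rest.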

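The channel attention mechanism the abstract refers to reweights convolutional feature channels by their global statistics. A squeeze-and-excitation style sketch in numpy (the layer sizes and random weights here are hypothetical, untrained placeholders, not the paper's trained model):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel attention on a feature map of
    shape (channels, length): global-average-pool each channel, pass the
    result through a two-layer bottleneck, and rescale the channels."""
    squeeze = features.mean(axis=1)                  # (C,) per-channel stats
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck, (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return features * weights[:, None]               # rescale each channel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    c, length, r = 8, 100, 2                         # channels, length, reduction
    feats = rng.standard_normal((c, length))
    w1 = rng.standard_normal((c // r, c)) * 0.1      # untrained toy weights
    w2 = rng.standard_normal((c, c // r)) * 0.1
    out = channel_attention(feats, w1, w2)
    print(out.shape)  # (8, 100)
```

Because the gates are sigmoid outputs in (0, 1), informative channels are passed through nearly unchanged while uninformative ones are suppressed, which is how the model focuses on the essential characteristics of the acoustic signal.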

Figures (g001–g014):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/9eadedb2860d/sensors-25-02573-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/51376d463a25/sensors-25-02573-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/38884768cd17/sensors-25-02573-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/612b0c9e77b6/sensors-25-02573-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/c9081e7ca6d0/sensors-25-02573-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/53d13861524c/sensors-25-02573-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/cb4164eff1fd/sensors-25-02573-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/7ff268e816b9/sensors-25-02573-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/00f520e00ce0/sensors-25-02573-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/d4c5b7742957/sensors-25-02573-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/8d233064fc3a/sensors-25-02573-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/b30014dbce21/sensors-25-02573-g012a.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/187f22b2685b/sensors-25-02573-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0607/12031622/66cc6982a433/sensors-25-02573-g014.jpg

Similar articles

1. Underwater Target Recognition Method Based on Singular Spectrum Analysis and Channel Attention Convolutional Neural Network.
Sensors (Basel). 2025 Apr 18;25(8):2573. doi: 10.3390/s25082573.
2. Underwater Acoustic Target Recognition Based on Attention Residual Network.
Entropy (Basel). 2022 Nov 15;24(11):1657. doi: 10.3390/e24111657.
3. A Novel Underwater Acoustic Target Recognition Method Based on MFCC and RACNN.
Sensors (Basel). 2024 Jan 2;24(1):273. doi: 10.3390/s24010273.
4. A Deep Convolutional Neural Network Inspired by Auditory Perception for Underwater Acoustic Target Recognition.
Sensors (Basel). 2019 Mar 4;19(5):1104. doi: 10.3390/s19051104.
5. End-to-end underwater acoustic source separation model based on EDBG-GALR.
Sci Rep. 2024 Oct 21;14(1):24748. doi: 10.1038/s41598-024-76602-8.
6. Underwater Acoustic Target Recognition Based on Depthwise Separable Convolution Neural Networks.
Sensors (Basel). 2021 Feb 18;21(4):1429. doi: 10.3390/s21041429.
7. Underwater single-channel acoustic signal multitarget recognition using convolutional neural networks.
J Acoust Soc Am. 2022 Mar;151(3):2245. doi: 10.1121/10.0009852.
8. Design and Performance Evaluation of a Deep Neural Network for Spectrum Recognition of Underwater Targets.
Comput Intell Neurosci. 2020 Aug 1;2020:8848507. doi: 10.1155/2020/8848507. eCollection 2020.
9. Multi-Stream Convolutional Neural Network-Based Wearable, Flexible Bionic Gesture Surface Muscle Feature Extraction and Recognition.
Front Bioeng Biotechnol. 2022 Mar 3;10:833793. doi: 10.3389/fbioe.2022.833793. eCollection 2022.
10. Advancing robust underwater acoustic target recognition through multitask learning and multi-gate mixture of experts.
J Acoust Soc Am. 2024 Jul 1;156(1):244-255. doi: 10.1121/10.0026481.

References cited in this article

1. Underwater Acoustic Target Recognition Based on Attention Residual Network.
Entropy (Basel). 2022 Nov 15;24(11):1657. doi: 10.3390/e24111657.
2. Adaptive extraction of modulation for cavitation noise.
J Acoust Soc Am. 2009 Dec;126(6):3106-13. doi: 10.1121/1.3244987.