Deep learning models for segmenting phonocardiogram signals: a comparative study.

Author Information

Alquran Hiam, Al-Issa Yazan, Alsalatie Mohammed, Tawalbeh Shefa

Affiliations

Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, Jordan.

Department of Computer Engineering, Yarmouk University, Irbid, Jordan.

Publication Information

PLoS One. 2025 Apr 14;20(4):e0320297. doi: 10.1371/journal.pone.0320297. eCollection 2025.

Abstract

Cardiac auscultation involves detecting the mechanical vibrations that occur on the body's surface and carry a range of sound frequencies. These sounds are generated by the movement and pulsation of different cardiac structures as they facilitate blood circulation; when recorded, they form the phonocardiogram (PCG). In this research, deep learning models, namely the gated recurrent unit (GRU), bidirectional GRU (BiGRU), and bidirectional long short-term memory (BiLSTM) networks, are applied separately to segment four specific regions within the PCG signal: S1 (the "lub" sound), the systolic region, S2 (the "dub" sound), and the diastolic region. These models are applied to three well-known datasets: PhysioNet/Computing in Cardiology Challenge 2016, Massachusetts Institute of Technology (MITHSDB), and CirCor DigiScope Phonocardiogram. The PCG signal underwent a series of pre-processing steps, including digital filtering and empirical mode decomposition, after which the deep learning algorithms were applied to achieve the highest segmentation accuracy. Remarkably, the proposed approach achieved an accuracy of 97.2% on the PhysioNet dataset and 96.98% on the MITHSDB dataset. Notably, this paper represents the first investigation into the segmentation of the CirCor DigiScope dataset, achieving an accuracy of 92.5%. This study compared the performance of various deep learning models across the aforementioned datasets, demonstrating the approach's efficiency, accuracy, and reliability as a software tool for healthcare settings.
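As a rough illustration of the pipeline the abstract describes, the sketch below band-pass filters a PCG recording, denoises it with empirical mode decomposition (EMD), and builds a BiLSTM that labels every sample as S1, systole, S2, or diastole. This is not the authors' code: the sampling rate, filter band, choice of intrinsic mode functions, layer sizes, and the use of the PyEMD and Keras libraries are all assumptions made for the example.

```python
# Minimal sketch of a PCG segmentation pipeline (assumed values throughout;
# the paper does not report these hyperparameters).
import numpy as np
from scipy.signal import butter, filtfilt   # digital band-pass filtering
from PyEMD import EMD                       # empirical mode decomposition
import tensorflow as tf
from tensorflow.keras import layers, models

FS = 1000          # assumed sampling rate (Hz)
NUM_CLASSES = 4    # S1, systole, S2, diastole

def preprocess(pcg: np.ndarray) -> np.ndarray:
    """Band-pass filter the raw PCG, then reconstruct a denoised signal
    from the low-order intrinsic mode functions returned by EMD."""
    b, a = butter(4, [25, 400], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, pcg)
    imfs = EMD().emd(filtered)
    # Assumption: the first few IMFs carry most of the heart-sound energy.
    return imfs[:3].sum(axis=0)

def build_bilstm(timesteps: int) -> tf.keras.Model:
    """Per-timestep 4-class sequence labeller (one softmax per sample)."""
    return models.Sequential([
        layers.Input(shape=(timesteps, 1)),
        layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
        layers.TimeDistributed(layers.Dense(NUM_CLASSES, activation="softmax")),
    ])

model = build_bilstm(timesteps=2000)  # e.g. 2-second windows at 1 kHz
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Train with x of shape (batch, 2000, 1) and y of shape (batch, 2000),
# where y holds integer region labels 0..3 for every timestep.
```

Swapping `layers.LSTM` for `layers.GRU` yields the BiGRU variant, and removing the `Bidirectional` wrapper gives the plain GRU model, so the same scaffold covers all three architectures compared in the paper.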

Figure 1 (pone.0320297.g001): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3b2c/11996215/d7d64bffd998/pone.0320297.g001.jpg
