
S1 and S2 Heart Sound Recognition Using Deep Neural Networks.

Author Information

Chen Tien-En, Yang Shih-I, Ho Li-Ting, Tsai Kun-Hsi, Chen Yu-Hsuan, Chang Yun-Fan, Lai Ying-Hui, Wang Syu-Siang, Tsao Yu, Wu Chau-Chung

Publication Information

IEEE Trans Biomed Eng. 2017 Feb;64(2):372-380. doi: 10.1109/TBME.2016.2559800.

Abstract

OBJECTIVE

This study focuses on recognition of the first (S1) and second (S2) heart sounds based only on acoustic characteristics; assumptions about the individual durations of S1 and S2 and about the S1-S2 and S2-S1 time intervals are not involved in the recognition process. The main objective is to investigate whether reliable S1 and S2 recognition performance can still be attained in situations where duration and interval information might not be accessible.

METHODS

A deep neural network (DNN) method is proposed for recognizing S1 and S2 heart sounds. In the proposed method, heart sound signals are first converted into a sequence of Mel-frequency cepstral coefficients (MFCCs). The K-means algorithm is applied to cluster the MFCC features into two groups to refine their representation and discriminative capability. The refined features are then fed to a DNN classifier to perform S1 and S2 recognition. We conducted experiments using actual heart sound signals recorded with an electronic stethoscope. Precision, recall, F-measure, and accuracy are used as the evaluation metrics.
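The K-means refinement step described above can be sketched with a minimal two-cluster K-means. The 2-D "feature" points below are synthetic stand-ins (an assumption for illustration); real MFCC vectors would typically have on the order of 13 coefficients per frame, and the resulting cluster labels would feed into the DNN classifier rather than being the final output.

```python
# Minimal sketch of clustering frame features into two groups (K = 2),
# as in the paper's MFCC-refinement step. Synthetic 2-D points stand in
# for MFCC vectors; this is not the authors' implementation.
import math
import random

def kmeans_2(points, iters=50, seed=0):
    """Cluster points into two groups; returns (labels, centroids)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, 2)  # initialize from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        labels = [
            min((0, 1), key=lambda k: math.dist(p, centroids[k]))
            for p in points
        ]
        # Update step: each centroid becomes the mean of its members.
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centroids[k] = tuple(
                    sum(c) / len(members) for c in zip(*members)
                )
    return labels, centroids

# Two well-separated synthetic clusters standing in for S1/S2 frames.
cluster_a = [(0.1 * i, 0.1 * i) for i in range(10)]
cluster_b = [(5 + 0.1 * i, 5 + 0.1 * i) for i in range(10)]
labels, centroids = kmeans_2(cluster_a + cluster_b)
print(labels)
```

With well-separated groups, the two synthetic clusters are recovered exactly; in the paper's pipeline, the analogous cluster assignments refine the MFCC features before DNN classification.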

RESULTS

The proposed DNN-based method achieves high precision, recall, and F-measure scores, with an accuracy rate of more than 91%.
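The reported scores are standard binary-classification metrics, computable from a confusion matrix. The counts below are hypothetical (not taken from the paper) and only illustrate the definitions:

```python
# Hypothetical confusion-matrix counts for a binary S1-vs-S2 recognizer;
# the numbers are made up for illustration, not results from the paper.
tp, fp, fn, tn = 45, 4, 5, 46

precision = tp / (tp + fp)            # predicted-S1 frames that are truly S1
recall = tp / (tp + fn)               # true S1 frames correctly recovered
f_measure = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(round(precision, 3), round(recall, 3),
      round(f_measure, 3), round(accuracy, 3))
# → 0.918 0.9 0.909 0.91
```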

CONCLUSION

The DNN classifier achieves higher evaluation scores than other well-known pattern classification methods.

SIGNIFICANCE

The proposed DNN-based method can achieve reliable S1 and S2 recognition performance based on acoustic characteristics alone, without an ECG reference and without assumptions about the individual durations of S1 and S2 or the S1-S2 and S2-S1 time intervals.
