IEEE J Biomed Health Inform. 2019 Nov;23(6):2435-2445. doi: 10.1109/JBHI.2019.2894222. Epub 2019 Jan 21.
This paper studies the use of deep convolutional neural networks to segment heart sounds into their main components. The proposed methods are based on the adoption of a deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. Different temporal modeling schemes are applied to the output of the proposed neural network, which induce the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). In particular, convolutional neural networks are used in conjunction with underlying hidden Markov models and hidden semi-Markov models to infer emission distributions. The proposed approaches are tested on heart sound signals from the publicly available PhysioNet dataset, and they are shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.9% and an average positive predictive value of 94% in detecting S1 and S2 sounds.
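As a rough illustration of the pipeline the abstract describes (not the authors' implementation), the sketch below pairs a small 1-D CNN that emits per-frame posteriors over the four heart-sound states with Viterbi decoding over a cyclic left-to-right HMM, so the decoded sequence follows the natural S1 → systole → S2 → diastole order. The network shape, input feature dimension, and transition probabilities are illustrative assumptions, and a semi-Markov (duration-modeling) variant would replace the simple geometric self-transition used here.

```python
# Minimal sketch: CNN frame posteriors + cyclic-HMM Viterbi decoding.
# All architecture/parameter choices are assumptions for illustration only.
import numpy as np
import torch
import torch.nn as nn

STATES = ["S1", "systole", "S2", "diastole"]

class FrameClassifier(nn.Module):
    """Tiny 1-D CNN mapping a (batch, features, frames) envelope/spectral
    representation to per-frame log-probabilities over the four states."""
    def __init__(self, in_channels: int = 4, n_states: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, n_states, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output shape: (batch, n_states, frames)
        return torch.log_softmax(self.net(x), dim=1)

def viterbi_cyclic(log_post: np.ndarray, self_prob: float = 0.9) -> np.ndarray:
    """Viterbi decoding with a cyclic left-to-right HMM: each state either
    stays put or advances to the next state in the S1 -> systole -> S2 ->
    diastole cycle. `log_post` has shape (frames, n_states)."""
    n_frames, n_states = log_post.shape
    log_trans = np.full((n_states, n_states), -np.inf)
    for s in range(n_states):
        log_trans[s, s] = np.log(self_prob)                       # stay
        log_trans[s, (s + 1) % n_states] = np.log(1 - self_prob)  # advance
    delta = np.full((n_frames, n_states), -np.inf)
    psi = np.zeros((n_frames, n_states), dtype=int)
    delta[0] = np.log(1.0 / n_states) + log_post[0]               # uniform start
    for t in range(1, n_frames):
        scores = delta[t - 1][:, None] + log_trans                # (prev, cur)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_post[t]
    path = np.zeros(n_frames, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n_frames - 2, -1, -1):                         # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

if __name__ == "__main__":
    model = FrameClassifier()
    features = torch.randn(1, 4, 200)            # stand-in for envelope features
    with torch.no_grad():
        log_post = model(features)[0].T.numpy()  # (frames, n_states)
    states = viterbi_cyclic(log_post)
    print([STATES[s] for s in states[:10]])
```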