
Breathing Sound Segmentation and Detection Using Transfer Learning Techniques on an Attention-Based Encoder-Decoder Architecture.

Author Information

Hsiao Chiu-Han, Lin Ting-Wei, Lin Chii-Wann, Hsu Fu-Shun, Lin Frank Yeong-Sung, Chen Chung-Wei, Chung Chi-Ming

Publication Information

Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:754-759. doi: 10.1109/EMBC44109.2020.9176226.

Abstract

This paper focuses on the use of an attention-based encoder-decoder model for the task of breathing sound segmentation and detection. This study aims to accurately segment the inspiration and expiration of patients with pulmonary diseases using the proposed model. Spectrograms of the lung sound signals, together with labels for every time segment, were used to train the model. The model first encodes the spectrogram and then detects inspiratory or expiratory sounds from the encoded image using an attention-based decoder. With the assistance of the attention mechanism, physicians can make a more precise diagnosis based on the more interpretable outputs. The respiratory sounds used for training and testing were recorded from 22 participants using digital stethoscopes or anti-noising microphone sets. Experimental results showed a high accuracy of 92.006% when 0.5-second time segments and ResNet101 as the encoder were applied. Consistent performance of the proposed method can be observed in ten-fold cross-validation experiments.
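The pipeline described in the abstract (encode spectrogram frames, attend over the encoded features, classify each time segment as inspiration or expiration) can be sketched in pure NumPy. This is an illustrative toy, not the paper's implementation: the encoder output is simulated random features standing in for ResNet101 activations, the attention is plain dot-product attention, and the shapes, the three-class head, and the variable names (`enc`, `query`, `W`) are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: T 0.5-second spectrogram segments, D-dim features.
T, D = 8, 16
rng = np.random.default_rng(0)
enc = rng.normal(size=(T, D))    # stand-in for encoder (e.g. ResNet) output
query = rng.normal(size=(T, D))  # decoder states, one per output segment

# Dot-product attention: each decoder step weighs all encoded frames,
# producing an attention map that can be inspected for interpretability.
scores = query @ enc.T               # (T, T) similarity scores
weights = softmax(scores, axis=1)    # each row sums to 1
context = weights @ enc              # (T, D) attended feature vectors

# Toy linear head: inspiration / expiration / neither, per segment.
W = rng.normal(size=(D, 3))
probs = softmax(context @ W, axis=1)
labels = probs.argmax(axis=1)        # one class label per time segment
```

The attention map (`weights`) is what would let a physician see which parts of the recording the model relied on for each predicted segment; in the actual paper a trained encoder and decoder replace the random stand-ins here.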

