Automatic Classification and Acoustic Auscultation of Heart, Lung, and Bowel Sounds Using Artificial Intelligence.

Author Information

Lin Yen-Sheng, Kapadia Ansh, Ortigoza Eric B

Affiliations

Department of Orthopaedic Surgery, UT Southwestern Medical Center, Dallas, TX.

Department of Physical Medicine and Rehabilitation, UT Southwestern Medical Center, Dallas, TX.

Publication Information

Res Sq. 2025 Jul 29:rs.3.rs-7061625. doi: 10.21203/rs.3.rs-7061625/v1.

Abstract

Auscultation of heart, lung, and bowel sounds remains a fundamental diagnostic technique in clinical practice despite significant technological advancements in medical imaging. However, the accuracy of auscultation-based diagnoses is highly dependent on clinician experience and expertise, leading to potential diagnostic inconsistencies. The objective of this study is to present a novel artificial intelligence (AI) framework for the automatic classification and acoustic differentiation of heart, lung, and bowel sounds, addressing the need for objective, reproducible diagnostic support tools. Our approach leverages recent advances in supervised machine learning and signal processing to extract distinctive acoustic signatures from publicly available, digitized heart, lung, and bowel sounds. By analyzing spectral, temporal, and morphological features across diverse asymptomatic populations, the algorithm achieves excellent classification performance, with predictive accuracies of 65.00% to 91.67% and validation accuracies of 83.87% to 94.62% across six AI models. The clinical implications of this algorithm extend beyond diagnostic support to applications in medical education, telemedicine, and continuous patient monitoring. This work contributes to emerging AI-assisted auscultation by providing a comprehensive framework for multi-organ sound classification with the potential to improve differential diagnostic accuracy and standardization in clinical settings.
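
As a rough illustration of the kind of pipeline the abstract describes, the sketch below extracts a few spectral and temporal descriptors from a digitized recording and trains a supervised classifier to separate heart, lung, and bowel sounds. It is a minimal sketch under assumed choices (librosa MFCC, spectral-centroid, zero-crossing, and RMS features with a scikit-learn random forest); the feature set, sample rate, classifier, and function names are assumptions for illustration, not the authors' published method or any of their six AI models.

```python
# Hypothetical sketch: spectral/temporal feature extraction plus a
# supervised classifier for heart vs. lung vs. bowel sounds.
# Feature choices, labels, and paths are illustrative assumptions only.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def extract_features(wav_path, sr=4000):
    """Summarize one recording with spectral and temporal descriptors."""
    y, sr = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # spectral envelope
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral brightness
    zcr = librosa.feature.zero_crossing_rate(y)               # temporal activity
    rms = librosa.feature.rms(y=y)                            # amplitude envelope
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        centroid.mean(axis=1), zcr.mean(axis=1), rms.mean(axis=1),
    ])


def train_and_validate(recordings):
    """recordings: list of (wav_path, label) pairs, label in {'heart', 'lung', 'bowel'}."""
    X = np.stack([extract_features(path) for path, _ in recordings])
    y = np.array([label for _, label in recordings])
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    return clf, accuracy_score(y_val, clf.predict(X_val))
```

A production pipeline would presumably also handle segmentation, noise suppression, and model-specific tuning before approaching the reported accuracy ranges; this sketch only shows the overall shape of feature extraction followed by supervised classification.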

Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b7be/12324590/2286afd2cab5/nihpp-rs7061625v1-f0001.jpg
