Lin Yen-Sheng, Kapadia Ansh, Ortigoza Eric B
Department of Orthopaedic Surgery, UT Southwestern Medical Center, Dallas, TX.
Department of Physical Medicine and Rehabilitation, UT Southwestern Medical Center, Dallas, TX.
Res Sq. 2025 Jul 29:rs.3.rs-7061625. doi: 10.21203/rs.3.rs-7061625/v1.
Auscultation of heart, lung, and bowel sounds remains a fundamental diagnostic technique in clinical practice despite significant technological advancements in medical imaging. However, the accuracy of auscultation-based diagnoses is highly dependent on clinician experience and expertise, leading to potential diagnostic inconsistencies. The objective of this study is to present a novel artificial intelligence (AI) framework for the automatic classification and acoustic differentiation of heart, lung, and bowel sounds, addressing the need for objective, reproducible diagnostic support tools. Our approach leverages recent advances in supervised machine learning and signal processing to extract distinctive acoustic signatures from publicly available, digitized heart, lung, and bowel sounds. By analyzing spectral, temporal, and morphological features across diverse asymptomatic populations, the algorithm achieves predictive accuracies of 65.00% to 91.67% and validation accuracies of 83.87% to 94.62% across six AI models. The clinical implications of this algorithm extend beyond diagnostic support to applications in medical education, telemedicine, and continuous patient monitoring. This work contributes to emerging AI-assisted auscultation by providing a comprehensive framework for multi-organ sound classification with the potential to improve differential diagnostic accuracy and standardization in clinical settings.
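The pipeline the abstract describes — extracting spectral and temporal features from digitized body sounds and classifying them with a supervised model — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the specific features (spectral centroid, RMS energy, zero-crossing rate), the nearest-centroid classifier, and the synthetic test tones are all assumptions chosen for a minimal, self-contained demonstration.

```python
# Illustrative sketch only (not the study's code): compute a few spectral and
# temporal features from an audio frame, then classify with a nearest-centroid
# rule, a minimal stand-in for the supervised models described in the abstract.
import numpy as np

def extract_features(signal, sr):
    """Return [spectral centroid (Hz), RMS energy, zero-crossing rate]."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    rms = float(np.sqrt(np.mean(signal ** 2)))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0)
    return np.array([centroid, rms, zcr])

def train_centroids(X, y):
    """Per-class mean feature vector: a minimal supervised classifier."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, features):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

if __name__ == "__main__":
    sr = 4000  # Hz; heart, lung, and bowel sounds occupy low frequencies
    t = np.linspace(0, 1, sr, endpoint=False)
    # Synthetic stand-ins for real recordings: a low-frequency "heart-like"
    # tone vs. a higher-frequency "lung-like" tone (purely illustrative).
    heart = np.sin(2 * np.pi * 50 * t)
    lung = np.sin(2 * np.pi * 400 * t)
    X = np.vstack([extract_features(heart, sr), extract_features(lung, sr)])
    y = np.array(["heart", "lung"])
    model = train_centroids(X, y)
    # A 60 Hz tone sits closest to the "heart" centroid.
    print(predict(model, extract_features(np.sin(2 * np.pi * 60 * t), sr)))
```

In practice, a framework like the one described would replace these hand-picked features with richer spectral, temporal, and morphological descriptors, and the nearest-centroid rule with trained models evaluated on held-out validation data.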