Department of Information Engineering and Computer Science, University of Trento, Trento, Italy.
Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy.
Comput Biol Med. 2024 Dec;183:109315. doi: 10.1016/j.compbiomed.2024.109315. Epub 2024 Nov 5.
Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and the inherent subjectivity of expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. The first strategy is a frame-to-video-level approach that computes frame-level predictions from deep learning (DL) models trained from scratch (F2V-TS) or from fine-tuned pre-trained models (F2V-FT), followed by aggregation of those predictions for video-level evaluation. The second strategy is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising 70 exams, with annotations provided by three expert human operators (3HOs). Results show that within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77%, showing moderate agreement with the 3HOs, while the direct video classification approach resulted in an accuracy of 72%, showing substantial agreement with the 3HOs. Our study lays the foundation for reliable AI-based solutions for newborn LUS data evaluation.
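The frame-to-video-level strategies aggregate per-frame predictions into a single video-level label. A minimal sketch of one common aggregation rule, majority voting, is shown below; the paper's exact aggregation rule is not specified here, and the function name and class labels are illustrative only:

```python
from collections import Counter

def aggregate_frame_predictions(frame_labels):
    """Majority vote over per-frame class labels to produce a video-level label.

    frame_labels: one predicted class per frame, as produced by a frame-level
    classifier. The label names used here ('horizontal', 'vertical',
    'consolidation') are illustrative, mirroring the LUS patterns described
    in the abstract.
    """
    if not frame_labels:
        raise ValueError("no frame predictions to aggregate")
    counts = Counter(frame_labels)
    # Most frequent class wins; ties resolve to the first-counted label.
    return counts.most_common(1)[0][0]

# Example: five frame-level predictions from a single LUS video
video_label = aggregate_frame_predictions(
    ["vertical", "vertical", "horizontal", "vertical", "consolidation"]
)
print(video_label)  # -> vertical
```

Other aggregation rules (e.g., averaging per-frame class probabilities before taking the argmax) follow the same frame-to-video pattern.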