Arntfield Robert, Wu Derek, Tschirhart Jared, VanBerlo Blake, Ford Alex, Ho Jordan, McCauley Joseph, Wu Benjamin, Deglint Jason, Chaudhary Rushil, Dave Chintan, VanBerlo Bennett, Basmaji John, Millington Scott
Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada.
Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada.
Diagnostics (Basel). 2021 Nov 4;11(11):2049. doi: 10.3390/diagnostics11112049.
Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between the A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network on 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on individual ultrasound frames while delivering diagnostically useful sensitivity and specificity at the video clip level.
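As a rough illustration of how clip-level inference can follow from a frame-level classifier of the kind described above, the sketch below aggregates per-frame B line probabilities into a single clip label. The aggregation rule, the function name classify_clip, and both thresholds are assumptions for illustration only, not the authors' published pipeline.

```python
# Minimal sketch (assumed aggregation rule, not the authors' method):
# turn per-frame B line probabilities from a trained frame classifier
# into a single clip-level label.
import numpy as np

def classify_clip(frame_probs, frame_threshold=0.5, clip_fraction=0.5):
    """Label a clip 'B line' when at least `clip_fraction` of its frames
    have a B line probability of `frame_threshold` or more; else 'A line'."""
    frame_probs = np.asarray(frame_probs, dtype=float)
    b_line_share = (frame_probs >= frame_threshold).mean()
    return "B line" if b_line_share >= clip_fraction else "A line"

# Example: hypothetical per-frame probabilities for one LUS clip
example_clip = [0.12, 0.78, 0.91, 0.66, 0.40, 0.85]
print(classify_clip(example_clip))  # -> "B line"
```

Any such rule trades frame-level sensitivity against clip-level specificity, which is why the study reports sensitivity and specificity separately at the clip level rather than relying on frame metrics alone.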