Ghosh Shrimanti, Felfeliyan Banafshe, Zhou Yuyue, Knight Jessica, Akhlaq Natasha, Kupper Jessica, Hareendranathan Abhilash R, Jaremko Jacob L
Annu Int Conf IEEE Eng Med Biol Soc. 2024 Jul;2024:1-4. doi: 10.1109/EMBC53108.2024.10782185.
Rotator cuff tendon tears, the most common shoulder injuries, are typically diagnosed through MRI, but can also be seen on ultrasound (US), a much less costly test that currently requires highly trained expert operators. An AI tool that identifies full-thickness rotator cuff tears on US could make this test far more accessible in clinical practice. We propose a two-step approach: segmentation followed by classification. Automatic segmentation of US scans is challenging due to speckle noise and low contrast. We used a CNN autoencoder that predicts boundary contour points of the humeral cortex and subacromial bursa directly from raw US images, rather than performing the more common pixel-wise semantic segmentation. Both the original US image and the corresponding segmentation mask are then passed to a classification network (VGG-16) to determine whether the tendon is torn or intact. This novel approach passes only the key portions of the scan (in which any tears are most visible) to the classification network, maximizing detection accuracy and clinical relevance. We evaluated this approach on data prospectively acquired from 210 patients, training on 11,600 images and testing on 2,900 images. We achieved an average segmentation Dice coefficient (DC) of 95.3% and Hausdorff distance (HD) of 2.9 mm, outperforming a U-Net model (DC = 90.5%, HD = 6.8 mm). The classification network achieved 85.2% accuracy (sensitivity 84.2%, specificity 83.3%) in classifying supraspinatus tendons as intact or torn from US images. These results indicate that our AI-driven US evaluation pipeline has the potential to enable less-experienced ultrasound users to detect rotator cuff tears with high accuracy and explainability, allowing more healthcare professionals to conduct scans, improving timely patient access to imaging, and streamlining treatment decisions.
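The two-step data flow described above (contour points from the segmentation network rasterized into a mask, then stacked with the original image as input to the classifier) might be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation; the helper names `contour_to_mask` and `classifier_input` are hypothetical, and the abstract does not specify how the mask and image are actually formatted for VGG-16.

```python
import numpy as np

def contour_to_mask(points, shape):
    """Rasterize a closed contour, given as an (K, 2) array of (x, y)
    points, into a binary boundary mask of the given (H, W) shape."""
    mask = np.zeros(shape, dtype=np.uint8)
    pts = np.vstack([points, points[:1]])  # close the contour
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
        # densely sample each segment and mark the pixels it crosses
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        mask[ys.clip(0, shape[0] - 1), xs.clip(0, shape[1] - 1)] = 1
    return mask

def classifier_input(image, mask):
    """Stack the US image and its segmentation mask as a 2-channel
    array, mirroring the idea of feeding both to the classifier."""
    return np.stack([image, mask.astype(image.dtype)], axis=0)
```

For example, a square contour on a 4x4 grid yields a mask marking only its border pixels, and stacking it with the image produces a `(2, 4, 4)` classifier input.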
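The two segmentation metrics reported (Dice coefficient and Hausdorff distance) can be sketched as below. This is a minimal NumPy version for illustration, not the authors' evaluation code, and the function names are placeholders; the paper's HD is presumably computed in millimetres after pixel-to-physical scaling, which is omitted here.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two contour point sets,
    given as (N, 2) and (M, 2) arrays of coordinates."""
    # pairwise Euclidean distances between every point in A and B
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    # largest nearest-neighbour distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Dice rewards area overlap while the Hausdorff distance penalizes the single worst boundary deviation, so reporting both (as the abstract does) captures complementary aspects of segmentation quality.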