Department of Engineering Design, Indian Institute of Technology Madras, India; School of Computer Science, University of Sydney, Sydney, New South Wales, Australia.
Ultrasound Med Biol. 2019 May;45(5):1259-1273. doi: 10.1016/j.ultrasmedbio.2018.11.016. Epub 2019 Feb 27.
Machine learning for ultrasound image analysis and interpretation can aid automated image classification in large-scale retrospective analyses, objectively deriving new indicators of abnormal fetal development that are embedded in ultrasound images. Current approaches to automatic classification are limited to using either image patches (cropped images) or the global (whole) image. Because many fetal organs share similar visual features, cropped images can lead to misclassification of certain structures, such as the kidneys and abdomen. Conversely, the whole image does not encode sufficient local information to identify different structures at different locations. Here we propose a method to automatically classify 14 different fetal structures in 2-D fetal ultrasound images by fusing information from both cropped regions of fetal structures and the whole image. Our method trains two feature extractors by fine-tuning pre-trained convolutional neural networks, one on whole fetal ultrasound images and the other on the discriminant regions of the fetal structures found in the whole image. The novelty of our method lies in integrating the classification decisions made from the global and local features without relying on priors. In addition, our method can use the classification outcome to localize the fetal structures in the image. In experiments on a data set of 4074 2-D ultrasound images (training: 3109, test: 965), our method achieved a mean accuracy of 97.05%, mean precision of 76.47% and mean recall of 75.41%. A Cohen's κ of 0.72 revealed the highest agreement between the ground truth and the proposed method. The superiority of the proposed method over the other, non-fusion-based methods is statistically significant (p < 0.05). We found that our method can classify images without ultrasound scanner overlays with a mean accuracy of 92%. The proposed method can be leveraged to retrospectively classify any ultrasound images in clinical research.
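The abstract does not specify the exact fusion rule, only that the global-branch and local-branch classification decisions are combined without class priors. One minimal, prior-free way to do this is unweighted averaging of the two branches' softmax probabilities; the sketch below illustrates that idea with plain numpy over the paper's 14 structure classes (the function names and the averaging rule are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(global_logits, local_logits):
    """Fuse global- and local-branch class scores without priors.

    Each branch's logits (shape: [n_classes], here 14 fetal
    structures) are converted to probabilities and averaged with
    equal weight, i.e. no class prior or learned weighting.
    Returns the fused probability vector and the predicted class.
    """
    fused = 0.5 * (softmax(global_logits) + softmax(local_logits))
    return fused, int(fused.argmax(axis=-1))

# Toy example: the global branch strongly favors class 2,
# the local branch weakly favors class 5.
g = np.zeros(14); g[2] = 4.0
l = np.zeros(14); l[5] = 1.0
probs, pred = fuse_predictions(g, l)
```

In this toy case the confident global branch dominates and the fused prediction is class 2; with a confident local branch the roles would reverse, which is the appeal of decision-level fusion over committing to either view alone.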