Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA 5005, Australia.
IEEE Trans Image Process. 2012 Mar;21(3):968-82. doi: 10.1109/TIP.2011.2169273. Epub 2011 Sep 23.
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need for a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) a complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, in deep learning methods that model the appearance of the LV, and in efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long-axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases; the results show that our approach produces segmentations comparable to those of the two other approaches using only 20 training images, and that increasing the training set to 400 images makes our approach generally more accurate. Finally, we show that efficient search methods reduce the complexity of the method by up to tenfold while still producing competitive segmentations. In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semisupervised learning methods to further reduce the dependence on rich and large training sets, and to design a shape model that is less dependent on the training set.
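The abstract reports sensitivity and specificity alongside "several error measures" comparing automatic and manual segmentations, without naming the remaining measures. Below is a minimal sketch of how such pixel-wise comparisons are typically computed from binary LV masks; the function name `segmentation_scores` is hypothetical, and the Dice and Jaccard overlaps are assumed examples of region-based measures rather than measures confirmed by the paper.

```python
import numpy as np

def segmentation_scores(auto_mask: np.ndarray, manual_mask: np.ndarray) -> dict:
    """Compare a binary automatic LV segmentation against a manual annotation.

    Both inputs are arrays of the same shape; nonzero entries mark pixels
    inside the LV contour.
    """
    auto = auto_mask.astype(bool)
    manual = manual_mask.astype(bool)

    tp = np.logical_and(auto, manual).sum()    # LV pixels found by both
    fp = np.logical_and(auto, ~manual).sum()   # automatic-only pixels
    fn = np.logical_and(~auto, manual).sum()   # missed LV pixels
    tn = np.logical_and(~auto, ~manual).sum()  # background agreed by both

    return {
        "sensitivity": tp / (tp + fn),          # fraction of the manual LV recovered
        "specificity": tn / (tn + fp),          # fraction of background correctly excluded
        "dice": 2 * tp / (2 * tp + fp + fn),    # region overlap (assumed measure)
        "jaccard": tp / (tp + fp + fn),         # alternative overlap (assumed measure)
    }


if __name__ == "__main__":
    # Toy example: two overlapping elliptical masks standing in for the
    # automatic and manual LV segmentations of one ultrasound frame.
    yy, xx = np.mgrid[0:128, 0:128]
    manual = ((yy - 64) / 40) ** 2 + ((xx - 64) / 25) ** 2 <= 1.0
    auto = ((yy - 60) / 38) ** 2 + ((xx - 66) / 26) ** 2 <= 1.0
    print(segmentation_scores(auto, manual))
```

In practice such region measures would be complemented by contour distances (e.g., average or Hausdorff distance between the automatic and expert contours) to capture boundary-level agreement, which region overlap alone does not reflect.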