Farag Amal, Roth Holger R, Liu Jiamin, Turkbey Evrim, Summers Ronald M
IEEE Trans Image Process. 2017 Jan;26(1):386-399. doi: 10.1109/TIP.2016.2624198. Epub 2016 Nov 1.
Robust organ segmentation is a prerequisite for computer-aided diagnosis, quantitative imaging analysis, pathology detection, and surgical assistance. For organs with high anatomical variability (e.g., the pancreas), previous segmentation approaches report low accuracies compared with well-studied organs such as the liver or heart. We present an automated bottom-up approach for pancreas segmentation in abdominal computed tomography (CT) scans. The method generates a hierarchical cascade of information propagation by classifying image patches at different resolutions and cascading superpixels (image segments). The system comprises four steps: 1) decomposition of CT slice images into a set of disjoint, boundary-preserving superpixels; 2) computation of pancreas class probability maps via dense patch labeling; 3) superpixel classification by pooling both intensity and probability features to form empirical statistics in cascaded random forest frameworks; and 4) simple connectivity-based post-processing. Dense image patch labeling is conducted using two methods: efficient random forest classification on image histogram, location, and texture features; and more expensive (but more accurate) deep convolutional neural network classification on larger image windows (i.e., with more spatial context). The 2-D CT slices are over-segmented by the simple linear iterative clustering (SLIC) approach, with model/parameter calibration, and labeled at the superpixel level as positive (pancreas) or negative (non-pancreas or background). The proposed method is evaluated on a data set of 80 manually segmented CT volumes using six-fold cross-validation. Its performance equals or surpasses other state-of-the-art methods (evaluated by "leave-one-patient-out"), with a Dice coefficient of 70.7% and a Jaccard index of 57.9%. In addition, computational efficiency improves significantly, requiring only 6-8 min per testing case versus ≥10 h for other methods. The segmentation framework using deep patch labeling confidences is also more numerically stable, as reflected in the smaller standard deviations of its performance metrics. Finally, we implement a multi-atlas label fusion (MALF) approach for pancreas segmentation using the same data set. Under six-fold cross-validation, our bottom-up segmentation method significantly outperforms its MALF counterpart: 70.7 ± 13.0% versus 52.51 ± 20.84% in Dice coefficients.
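As a rough illustration of steps 1)-3), the sketch below over-segments a CT slice with SLIC, pools per-superpixel statistics of the intensities and of a dense patch-labeling probability map, and trains a random forest on those pooled features. The library choices (scikit-image, scikit-learn), the specific feature set, and the helper name `superpixel_features` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the superpixel pipeline, assuming scikit-image and scikit-learn.
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(ct_slice, prob_map, n_segments=400, compactness=10.0):
    """Over-segment one grayscale CT slice and pool empirical statistics per superpixel.

    ct_slice : 2-D array of intensities; prob_map : 2-D array of pancreas
    class confidences from the dense patch-labeling stage (assumed given).
    """
    labels = slic(ct_slice, n_segments=n_segments, compactness=compactness,
                  start_label=0, channel_axis=None)  # channel_axis=None: grayscale input
    feats = []
    for sp in np.unique(labels):
        mask = labels == sp
        intensities = ct_slice[mask]
        probs = prob_map[mask]
        feats.append([
            intensities.mean(), intensities.std(),
            np.percentile(intensities, 25), np.percentile(intensities, 75),
            probs.mean(), probs.std(), probs.max(),
            np.percentile(probs, 90),
        ])
    return labels, np.asarray(feats)

# Superpixel-level classification; in the full cascaded framework the confidences
# of a first forest would be fed as extra features into a second-stage forest.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
# rf.fit(training_features, training_labels)   # labels: 1 = pancreas, 0 = background
```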
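For reference, the Dice coefficient and Jaccard index reported above follow the standard volumetric overlap definitions; the short sketch below (plain NumPy, not the authors' evaluation code) makes the two metrics explicit.

```python
import numpy as np

def dice_and_jaccard(pred, gt):
    """Volumetric overlap between binary segmentation masks.

    Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.
    Per case they satisfy J = D / (2 - D), although cohort means
    (as reported in the abstract) need not obey this exactly.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / np.logical_or(pred, gt).sum()
    return dice, jaccard
```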