Liu Jiaqi, Huo Yuankai, Xu Zhoubing, Assad Albert, Abramson Richard G, Landman Bennett A
Computer Science, Vanderbilt University, Nashville, TN, USA 37235.
Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235.
Proc SPIE Int Soc Opt Eng. 2017 Feb 11;10133. doi: 10.1117/12.2254437. Epub 2017 Feb 24.
Automatic spleen segmentation on CT is challenging due to the complexity of abdominal structures. Multi-atlas segmentation (MAS) has been shown to be a promising approach for spleen segmentation. To address the substantial registration errors among heterogeneous abdominal CT images, the context learning method for performance level estimation (CLSIMPLE) was previously proposed. The context learning method generates a probability map for a target image using a Gaussian mixture model (GMM) as the prior in a Bayesian framework. However, CLSIMPLE typically trains a single GMM from the entire heterogeneous training atlas set, so the estimated spatial prior maps might not represent specific target images accurately. Rather than using all training atlases, we propose an adaptive GMM-based context learning technique (AGMMCL) that trains the GMM on subsets of the training data tailored to each target image. Training sets are selected adaptively based on the similarity between the atlases and the target image in cranio-caudal length, which is measured manually on the target image. To validate the proposed method, a heterogeneous dataset with a large variation in spleen size (100 cc to 9000 cc) is used. We designate size categories to differentiate groups of spleens: 0 to 100 cc as small, 200 to 500 cc as medium, 500 to 1000 cc as large, 1000 to 2000 cc as XL, and 2000 cc and above as XXL. The results show that AGMMCL yields more accurate spleen segmentations by training GMMs adaptively for different target images.
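The two core steps described above, selecting a training subset by cranio-caudal length similarity and fitting a GMM spatial prior on that subset, can be sketched as follows. This is a minimal illustration only: the function names, the choice of k nearest atlases, and the synthetic coordinates are all assumptions, and the paper's full pipeline (registration, label fusion, and the Bayesian combination of the prior with atlas performance estimates) is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_atlases(atlas_lengths, target_length, k=5):
    """Pick the k training atlases whose cranio-caudal length is closest
    to the target's manually measured length (illustrative criterion)."""
    diffs = np.abs(np.asarray(atlas_lengths, dtype=float) - target_length)
    return np.argsort(diffs)[:k]

def fit_spatial_prior(voxel_coords, n_components=3, seed=0):
    """Fit a GMM over spleen voxel coordinates pooled from the selected
    atlases; score_samples then gives a log spatial prior for new voxels."""
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(voxel_coords)
    return gmm

# Toy usage with synthetic data (not real CT measurements).
rng = np.random.default_rng(0)
atlas_lengths = [80, 95, 110, 160, 220, 300]  # hypothetical lengths, mm
chosen = select_atlases(atlas_lengths, target_length=100, k=3)

# Pretend spleen voxel coordinates pooled from the chosen atlases.
coords = rng.normal(loc=[40.0, 60.0, 30.0], scale=5.0, size=(500, 3))
prior = fit_spatial_prior(coords)
log_prior = prior.score_samples(np.array([[40.0, 60.0, 30.0]]))
```

The selection step is the essence of the adaptive idea: atlases with a very different spleen size contribute misleading spatial statistics, so restricting the GMM's training set to size-similar atlases tightens the prior around plausible spleen locations for that target.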