Park Sang Hyun, Gao Yaozong, Shi Yinghuan, Shen Dinggang
Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599.
Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599.
Med Phys. 2014 Nov;41(11):111715. doi: 10.1118/1.4898200.
Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy for prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate an editing result from a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct segmentations produced by any automatic or interactive segmentation method.
The authors formulate the editing problem as a semisupervised learning problem that can exploit both prior knowledge from training data and the valuable information in user interactions. Specifically, within a region of interest near the given user interactions, training labels that match those interactions well are locally searched from a training set. By voting among the selected training labels, confident prostate voxels, confident background voxels, and unconfident voxels are estimated. To capture the informative relationships between voxels, location-adaptive features are selected from the confident voxels using a regression forest and the Fisher separation criterion. The manifold configuration computed in the resulting feature space is then enforced in the semisupervised learning algorithm, and the labels of the unconfident voxels are predicted by this regularized semisupervised learning algorithm.
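The two core steps of the paragraph above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it shows (1) ranking features with the Fisher separation criterion computed from the two confident classes, and (2) propagating labels from confident to unconfident voxels by solving a manifold-regularized (graph-Laplacian) semisupervised objective. All function names, array shapes, and parameter values here are assumptions for illustration.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher separation criterion per feature:
    (m1 - m0)^2 / (v1 + v0), computed from the two confident classes
    (y == 0: background, y == 1: prostate)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X0.var(axis=0) + 1e-12  # guard against zero variance
    return num / den

def propagate_labels(X, y, n_labeled, sigma=1.0, lam=1.0):
    """Manifold-regularized label propagation in closed form:
    minimize ||f_L - y_L||^2 + lam * f^T L f, where L is the graph
    Laplacian of a Gaussian affinity graph over all voxels.
    X: (n, d) feature matrix; the first n_labeled rows carry labels y."""
    # Gaussian affinity graph over labeled + unlabeled samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W      # unnormalized graph Laplacian
    n = X.shape[0]
    J = np.zeros((n, n))                # selector for the labeled entries
    J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    y_full = np.zeros(n)
    y_full[:n_labeled] = y
    # Stationarity condition: (J + lam * L) f = J y
    f = np.linalg.solve(J + lam * L + 1e-9 * np.eye(n), J @ y_full)
    return (f > 0.5).astype(int)        # threshold the relaxed labels
```

The closed-form solve works because the objective is quadratic in the relaxed label vector f; the Laplacian term penalizes label differences between voxels that are close in the selected feature space, so labels spread along the data manifold rather than in raw coordinate space.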
The proposed interactive segmentation method was applied to correct the automatic segmentation results of 30 challenging CT images. The correction was repeated three times, with different user interactions performed at different times, to evaluate both efficiency and robustness. Automatic segmentation results with an original average Dice similarity coefficient of 0.78 improved to 0.865-0.872 after 55-59 interactions with the proposed method, and each editing procedure took less than 3 s. In addition, the proposed method produced the most consistent editing results across different user interactions, compared to other methods.
The proposed method obtains robust editing results with few interactions across various erroneous segmentation cases, by selecting location-adaptive features and further imposing manifold regularization. The authors expect the proposed method to greatly reduce the laborious burden of manual editing, as well as both intra- and interobserver variability across clinicians.