Philips Research North America, Briarcliff Manor, NY 10510, USA.
IEEE Trans Biomed Eng. 2010 May;57(5):1158-66. doi: 10.1109/TBME.2009.2037491. Epub 2010 Feb 5.
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is highly desired in many clinical applications. However, robust and automated prostate segmentation is challenging due to the low SNR of TRUS and the missing boundaries in shadow areas caused by calcifications or hyperdense prostate tissue. This paper presents a novel method that uses a priori shapes estimated from partial contours to segment the prostate. The proposed method automatically extracts the prostate boundary from 2-D TRUS images without user interaction for shape correction in shadow areas. During segmentation, missing boundaries in shadow areas are estimated by a partial active shape model, which takes partial contours as input but returns a complete shape estimate. With this shape guidance, a discrete deformable model performs an optimal search that minimizes a segmentation energy functional, computed efficiently by dynamic programming. Segmentation proceeds in a coarse-to-fine multiresolution fashion for robustness and computational efficiency. Promising segmentation results were demonstrated on 301 TRUS images acquired from 19 patients, with an average mean absolute distance error of 2.01 ± 1.02 mm.
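To illustrate the core idea of completing a shape from partial contours, the sketch below shows one common way a statistical shape model can be fit to only the visible landmarks: restrict the PCA shape basis to the observed coordinates, solve a least-squares problem for the shape coefficients, clamp them to a plausible range, and reconstruct the full contour. This is a minimal illustration under assumed inputs (a trained PCA mean, eigenvectors, and eigenvalues), not the authors' implementation; all names are illustrative.

```python
import numpy as np

def complete_shape_from_partial(x_partial, observed_idx,
                                mean_shape, eigvecs, eigvals, k=3.0):
    """Estimate a full contour from partially observed landmarks.

    x_partial    : (m,) flattened coordinates of the visible landmarks
    observed_idx : (m,) indices into the flattened full-shape vector
    mean_shape   : (2n,) mean shape from PCA training
    eigvecs      : (2n, t) principal shape modes
    eigvals      : (t,) eigenvalues of the retained modes
    """
    # Restrict the shape model to the observed coordinates only.
    P_obs = eigvecs[observed_idx, :]            # (m, t)
    r = x_partial - mean_shape[observed_idx]    # residual of visible points

    # Least-squares estimate of the shape coefficients from partial data.
    b, *_ = np.linalg.lstsq(P_obs, r, rcond=None)

    # Clamp coefficients to +/- k standard deviations for a plausible shape.
    limit = k * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)

    # Reconstruct the complete contour, filling in the shadowed segments.
    return mean_shape + eigvecs @ b
```

In a segmentation loop of the kind described above, such a completed shape would serve as the prior guiding the deformable-model search in shadow regions, while the visible boundary segments continue to be driven by the image energy term.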