

WE-E-213CD-09: Multi-Atlas Fusion Using a Tissue Appearance Model.

Author Information

Yang J, Garden A, Zhang Y, Zhang L, Court L, Dong L

Affiliations

UT MD Anderson Cancer Center, Houston, TX.

Scripps Proton Therapy Center, San Diego, CA.

Publication Information

Med Phys. 2012 Jun;39(6Part27):3961. doi: 10.1118/1.4736165.

Abstract

PURPOSE

To improve multi-atlas-based auto-segmentation by integrating a tissue appearance model with the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm for multi-atlas fusion.

METHODS

Ten head-and-neck planning CT images were acquired (resolution: 1.0 × 1.0 × 2.5 mm), and the parotid glands were contoured manually by a head-and-neck oncologist. We performed 10 leave-one-out tests, using one patient as the test patient and the remaining nine patients as atlases. Deformable registration was first applied to transform the atlas parotid contours to the test image one by one. The STAPLE algorithm was initialized with a parotid tissue appearance model, which was estimated from the test image and encoded the intensity information of the parotid glands. The individual deformed contours were then fused using the STAPLE algorithm to produce a best approximation of the true contour. The tissue appearance model was also applied in a deformable model segmentation to further refine the fused contours.
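The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of how a STAPLE-style label fusion can be initialized with an intensity-based appearance prior: each deformed atlas contour acts as a "rater", and the voxel-wise prior is taken from a simple Gaussian intensity model of parotid tissue estimated from the test image. All function names, parameters, and the Gaussian form of the appearance model are illustrative assumptions.

```python
# Hypothetical sketch of STAPLE fusion with an appearance-model prior.
import numpy as np

def gaussian_appearance_prior(image, mu, sigma, scale=0.5):
    """Prior probability that a voxel belongs to the parotid, from a
    1-D Gaussian intensity model (mu, sigma assumed to be estimated
    from the test image, e.g. within a rough initial region)."""
    p = np.exp(-0.5 * ((image - mu) / sigma) ** 2)
    return np.clip(scale * p, 1e-6, 1 - 1e-6)

def staple_with_prior(decisions, prior, n_iter=30):
    """decisions: (K, N) binary array, one row per deformed atlas contour
    (flattened to N voxels).  prior: (N,) voxel-wise prior P(label = 1)
    from the appearance model.  Returns the posterior probability map W
    and per-atlas (sensitivity, specificity), via the standard STAPLE
    EM updates but with the appearance prior in place of a flat prior."""
    K, N = decisions.shape
    p = np.full(K, 0.9)   # sensitivities
    q = np.full(K, 0.9)   # specificities
    W = prior.copy()
    for _ in range(n_iter):
        # E-step: posterior that the true label is 1 at each voxel
        a = prior.copy()
        b = 1.0 - prior
        for k in range(K):
            d = decisions[k]
            a *= np.where(d == 1, p[k], 1 - p[k])
            b *= np.where(d == 1, 1 - q[k], q[k])
        W = a / (a + b + 1e-12)
        # M-step: update sensitivity/specificity for each atlas
        for k in range(K):
            d = decisions[k]
            p[k] = (W * d).sum() / (W.sum() + 1e-12)
            q[k] = ((1 - W) * (1 - d)).sum() / ((1 - W).sum() + 1e-12)
    return W, p, q
```

Thresholding the returned posterior map W at 0.5 would give a fused contour, which could then be handed to a deformable model step for boundary refinement as described above.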

RESULTS

The multi-atlas fusion using the tissue appearance model produced an average Dice coefficient of 85.2% ± 3.1% (left parotid) and 84.9% ± 3.9% (right parotid) over the 10 tests between the auto-contours and the manual contours, and an average mean surface distance of 1.6 ± 0.3 mm and 1.6 ± 0.4 mm for the left and right parotids, respectively. This demonstrated good agreement between the manual and auto-delineated contours. Our results also showed that, without the tissue appearance model, the auto-delineated parotid contours could include nearby bony structures; using the appearance model corrected this problem.
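For reference, here is a brief sketch (illustrative only, not the evaluation code used in the study) of the two reported metrics, the Dice coefficient and a symmetric mean surface distance, for binary masks on a CT grid with the voxel spacing given above; the spacing argument must match the axis order of the mask arrays.

```python
# Illustrative computation of Dice coefficient and mean surface distance.
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(a, b, spacing=(2.5, 1.0, 1.0)):
    """Symmetric mean distance (mm) between the surfaces of two binary
    masks; spacing is the voxel size along each array axis."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)   # boundary voxels of mask a
    surf_b = b ^ ndimage.binary_erosion(b)   # boundary voxels of mask b
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
```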

CONCLUSIONS

Incorporating intensity information through a tissue appearance model into the STAPLE algorithm for multi-atlas fusion improved the refinement of anatomical boundaries in multi-atlas-based auto-segmentation.

