Wang Huiqian, Udupa Jayaram K., Odhner Dewey, Tong Yubing, Zhao Liming, Torigian Drew A.
College of Optoelectronic Engineering, Chongqing University, Chongqing 400044, China and Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104.
Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104.
Med Phys. 2016 Jan;43(1):613. doi: 10.1118/1.4939127.
Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide, accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. Body-wide anatomy recognition in PET/CT is the critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., "Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images," Med. Image Anal. 18, 752-771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images.
The authors advance the AAR approach in this work on three fronts: (i) from the body-region-wise treatment in the work of Udupa et al. to the whole body; (ii) from the use of image intensity in optimal object recognition to the use of intensity plus object-specific texture properties; and (iii) from an intramodality model-building-and-recognition strategy to an intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture generalizes the previous optimal-threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image and, in the process, brings performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the reuse of existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly (anatomy) from the modality-specific portrayal of tissue properties in the image.
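For concreteness, the following is a minimal Python sketch of the style of recognition described above: intensity and texture evidence are fused into a fuzzy membership image, and candidate thresholds and poses of a fuzzy object model are searched for the best match. All names, the fusion weighting, and the restriction of the pose search to integer translations are illustrative assumptions, not the authors' actual formulation.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def membership_image(intensity, texture, w_int=0.5, w_tex=0.5):
        # Fuse normalized intensity and texture evidence into a fuzzy
        # membership image in [0, 1] (hypothetical equal weighting).
        norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-9)
        return w_int * norm(intensity) + w_tex * norm(texture)

    def recognize(model, image, thresholds, offsets):
        # Optimal-threshold recognition sketch: for each candidate threshold
        # of the (intensity or membership) image and each integer translation
        # of the fuzzy model, score the agreement between the thresholded
        # image and the model; return the best threshold and pose.
        best_score, best_t, best_off = -np.inf, None, None
        for t in thresholds:
            binary = image >= t
            for off in offsets:
                m = nd_shift(model, off, order=0)   # translate fuzzy model
                score = (m * binary).sum() - (m * ~binary).sum()
                if score > best_score:
                    best_score, best_t, best_off = score, t, off
        return best_t, best_off

The point the sketch fixes is that such recognition operates identically whether "image" is a raw intensity image or any derived fuzzy membership image; the AAR method's actual pose search and scoring are more sophisticated.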
Key ways of combining the above three basic ideas led the authors to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance of these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in terms of object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels of known true locations on whole-body low-dose CT images and within two voxels on body-region-wise low-dose CT images. Surprisingly, localization error within three voxels seems possible even directly on body-region-wise PET images.
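The two evaluation metrics can be made concrete with a short sketch. The centroid-distance and voxel-count formulations below are plausible readings of "localization error in voxels" and "size estimation error"; the paper's exact definitions may differ.

    import numpy as np

    def centroid(obj):
        # Fuzzy-weighted geometric center of an object, in voxel coordinates.
        idx = np.indices(obj.shape).reshape(obj.ndim, -1)
        w = obj.ravel().astype(float)
        return (idx * w).sum(axis=1) / w.sum()

    def localization_error_voxels(recognized, truth):
        # Distance in voxels between the recognized model's center and the
        # true object's center.
        return np.linalg.norm(centroid(recognized) - centroid(truth))

    def size_estimation_error(recognized, truth, level=0.5):
        # Ratio of estimated to true object size, via voxel counts above a
        # membership level (the 0.5 level is an illustrative choice).
        return (recognized >= level).sum() / (truth >= level).sum()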
The previous body-region-wise approach can be extended to the whole-body torso with similar object localization performance. The combined use of image texture and intensity properties yields the best object localization accuracy. In both the body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches the levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing the strategy that is optimal for each object.