Trullo Roger, Petitjean Caroline, Nie Dong, Shen Dinggang, Ruan Su
Normandie Univ., UNIROUEN, UNIHAVRE, INSA Rouen, LITIS, 76000 Rouen, France.
Department of Radiology and BRIC, UNC-Chapel Hill, Chapel Hill, USA.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2017). 2017 Sep;10553:21-29. doi: 10.1007/978-3-319-67558-9_3. Epub 2017 Sep 9.
Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, to prevent irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is challenging. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Unlike previous works in OAR segmentation, where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including the esophagus, heart, aorta, and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which are then used in the second network when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, to provide an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs through Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ, using the maps obtained by the first architecture to learn anatomical constraints that guide and refine the segmentations. Experimental results on 30 CT scans show superior performance compared with other state-of-the-art methods.
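The SharpMask-style fusion mentioned above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the 2x nearest-neighbour upsampling, the 1x1 channel-mixing weight `w`, and the toy feature shapes are all illustrative assumptions, showing only the general idea of merging a coarse high-level semantic map with same-resolution low-level features.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def refine(low_level, high_level, w):
    """One SharpMask-style refinement step (illustrative sketch only):
    upsample the coarse high-level map, fuse it with the same-resolution
    low-level features, and mix channels with a learned 1x1 weight `w`,
    followed by a ReLU non-linearity."""
    up = upsample2x(high_level)                        # (C, 2h, 2w)
    fused = np.concatenate([low_level, up], axis=0)    # (2C, 2h, 2w)
    mixed = np.tensordot(w, fused, axes=([1], [0]))    # 1x1 convolution
    return np.maximum(mixed, 0.0)                      # ReLU

# Toy example: 4-channel features at two scales.
rng = np.random.default_rng(0)
low = rng.standard_normal((4, 16, 16))    # fine, low-level features
high = rng.standard_normal((4, 8, 8))     # coarse, high-level features
w = rng.standard_normal((4, 8)) * 0.1     # mixes 8 -> 4 channels
out = refine(low, high, w)
print(out.shape)  # (4, 16, 16)
```

In a full network this refinement would be stacked across several scales, so that semantic information from deep layers is progressively sharpened by the higher-resolution early-layer features.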