
Recurrent feature fusion learning for multi-modality PET-CT tumor segmentation.

Authors

Bi Lei, Fulham Michael, Li Nan, Liu Qiufang, Song Shaoli, Dagan Feng David, Kim Jinman

Affiliations

School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia.

School of Computer Science, University of Sydney, NSW, Australia; Australian Research Council Training Centre for Innovative Bioengineering, NSW, Australia; Department of Molecular Imaging, Royal Prince Alfred Hospital, NSW, Australia.

Publication Information

Comput Methods Programs Biomed. 2021 May;203:106043. doi: 10.1016/j.cmpb.2021.106043. Epub 2021 Mar 11.

Abstract

BACKGROUND AND OBJECTIVE

[18F]-fluorodeoxyglucose (FDG) positron emission tomography - computed tomography (PET-CT) is now the preferred imaging modality for staging many cancers. PET images characterize tumoral glucose metabolism while CT depicts the complementary anatomical localization of the tumor. Automatic tumor segmentation is an important step in image analysis in computer-aided diagnosis systems. Recently, fully convolutional networks (FCNs), with their ability to leverage annotated datasets and extract image feature representations, have become the state of the art in tumor segmentation. There are few FCN-based methods that support multi-modality images, and current methods have primarily focused on fusing multi-modality image features at various stages: early-fusion, where the multi-modality image features are fused prior to the FCN; late-fusion, where the resultant features are fused; and hyper-fusion, where multi-modality image features are fused across multiple image feature scales. Early- and late-fusion methods, however, have inherently limited freedom to fuse complementary multi-modality image features. Hyper-fusion methods learn different image features across different image feature scales, which can result in inaccurate segmentations, in particular where tumors have heterogeneous textures.
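The early- vs. late-fusion distinction above can be illustrated with a minimal pure-Python sketch. Here `toy_net` is a hypothetical stand-in for an FCN (a per-pixel sigmoid over the channel mean), used only so the example runs; it is not the network used in the paper.

```python
import math

def toy_net(channels):
    """Stand-in for an FCN: per-pixel sigmoid of the channel-wise mean."""
    n = len(channels)
    rows, cols = len(channels[0]), len(channels[0][0])
    return [[1.0 / (1.0 + math.exp(-sum(ch[i][j] for ch in channels) / n))
             for j in range(cols)] for i in range(rows)]

# Tiny 2x2 "slices" standing in for a PET (metabolic) and CT (anatomical) image.
pet = [[0.9, 0.1], [0.8, 0.2]]
ct  = [[0.5, 0.5], [0.4, 0.6]]

# Early-fusion: both modalities enter ONE network together, as input channels.
early_out = toy_net([pet, ct])

# Late-fusion: each modality gets its own network; the outputs are then fused
# (here, simply averaged).
out_pet = toy_net([pet])
out_ct  = toy_net([ct])
late_out = [[(a + b) / 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(out_pet, out_ct)]
```

In early-fusion the network can mix PET and CT signals at every layer but is locked into one shared pathway; in late-fusion each modality is processed independently and only the final predictions interact, which is the "limited freedom" the abstract refers to.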

METHODS

We propose a recurrent fusion network (RFN), which consists of multiple recurrent fusion phases that progressively fuse the complementary multi-modality image features with intermediary segmentation results derived at the individual recurrent fusion phases: (1) the recurrent fusion phases iteratively learn the image features and then refine the subsequent segmentation results; and (2) the intermediary segmentation results allow our method to focus on learning the multi-modality image features around these intermediary segmentation results, which minimizes the risk of inconsistent feature learning.
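The recurrent refinement idea can be sketched as a loop in which each phase re-uses the modality inputs together with the previous intermediary mask. The `fuse_and_segment` function and its weights below are hypothetical stand-ins for illustration, not the authors' actual RFN blocks.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse_and_segment(pet, ct, prev_mask):
    """One illustrative recurrent fusion phase: combine both modalities with
    the intermediary mask from the previous phase and emit a refined mask.
    The weights (1.0, 0.5) and bias (-1.0) are arbitrary stand-ins."""
    n = len(pet)
    return [[sigmoid(pet[i][j] + 0.5 * ct[i][j] + prev_mask[i][j] - 1.0)
             for j in range(len(pet[0]))] for i in range(n)]

pet = [[0.9, 0.1], [0.8, 0.2]]   # stand-in PET slice (metabolic signal)
ct  = [[0.5, 0.5], [0.4, 0.6]]   # stand-in CT slice (anatomical signal)
mask = [[0.0, 0.0], [0.0, 0.0]]  # start from an empty segmentation

# Three recurrent fusion phases: each phase re-uses the multi-modality
# features together with the intermediary mask from the previous phase.
for phase in range(3):
    mask = fuse_and_segment(pet, ct, mask)
```

Feeding the intermediary mask back in lets later phases concentrate on the regions the earlier phases already flagged, which is the progressive refinement the method describes.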

RESULTS

We evaluated our method on two pathologically proven non-small cell lung cancer PET-CT datasets. We compared our method to the commonly used fusion methods (early-fusion, late-fusion and hyper-fusion) and to state-of-the-art PET-CT tumor segmentation methods on various network backbones (ResNet, DenseNet and 3D-UNet). Our results show that the RFN provides more accurate segmentation than the existing methods and generalizes to different datasets.

CONCLUSIONS

We show that learning through multiple recurrent fusion phases allows the iterative re-use of multi-modality image features, which refines the tumor segmentation results. We also found that our RFN produces consistent segmentation results across different network architectures.

