CT-Less Whole-Body Bone Segmentation of PET Images Using a Multimodal Deep Learning Network.

Author Information

Bao Nan, Zhang Jiaxin, Li Zhikun, Wei Shiyu, Zhang Jiazhen, Greenwald Stephen E, Onofrey John A, Lu Yihuan, Xu Lisheng

Publication Information

IEEE J Biomed Health Inform. 2025 Feb;29(2):1151-1164. doi: 10.1109/JBHI.2024.3501386. Epub 2025 Feb 10.

Abstract

In bone cancer imaging, positron emission tomography (PET) is ideal for the diagnosis and staging of bone cancers due to its high sensitivity to malignant tumors. The diagnosis of bone cancer requires tumor analysis and localization, where accurate and automated whole-body bone segmentation (WBBS) is often needed. Current WBBS for PET imaging is based on paired Computed Tomography (CT) images. However, mismatches between CT and PET images often occur due to patient motion, which leads to erroneous bone segmentation and thus, to inaccurate tumor analysis. Furthermore, there are some instances where CT images are unavailable for WBBS. In this work, we propose a novel multimodal fusion network (MMF-Net) for WBBS of PET images, without the need for CT images. Specifically, the tracer activity (λ-MLAA), attenuation map (μ-MLAA), and synthetic attenuation map (μ-DL) images are introduced into the training data. We first design a multi-encoder structure employed to fully learn modality-specific encoding representations of the three PET modality images through independent encoding branches. Then, we propose a multimodal fusion module in the decoder to further integrate the complementary information across the three modalities. Additionally, we introduce revised convolution units, SE (Squeeze-and-Excitation) Normalization and deep supervision to improve segmentation performance. Extensive comparisons and ablation experiments, using 130 whole-body PET image datasets, show promising results. We conclude that the proposed method can achieve WBBS with moderate to high accuracy using PET information only, which potentially can be used to overcome the current limitations of CT-based approaches, while minimizing exposure to ionizing radiation.
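The "SE (Squeeze-and-Excitation) Normalization" mentioned in the abstract builds on the squeeze-excite-scale gating idea: each feature channel is summarized by a global average ("squeeze"), passed through a small bottleneck network to produce a per-channel gate ("excitation"), and the channels are reweighted by those gates ("scale"). A minimal pure-Python sketch of that mechanism on a toy (channels × voxels) feature map follows; the weights `w1`/`w2` and function names are hypothetical illustrations, not the paper's trained parameters or actual implementation.

```python
# Toy sketch of Squeeze-and-Excitation (SE) channel recalibration.
# feature_map: list of C channels, each a flat list of voxel values.
# w1 (C x C_r) and w2 (C_r x C) are placeholder bottleneck weights.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(feature_map, w1, w2):
    # Squeeze: global average pooling per channel.
    z = [sum(ch) / len(ch) for ch in feature_map]
    # Excitation: bottleneck FC layer with ReLU ...
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    # ... followed by an FC layer with sigmoid, yielding one gate per channel.
    gates = [sigmoid(sum(hidden[j] * w2[j][k] for j in range(len(hidden))))
             for k in range(len(feature_map))]
    # Scale: reweight each channel by its learned gate.
    return [[v * gates[c] for v in feature_map[c]]
            for c in range(len(feature_map))]
```

In a 3D segmentation network the squeeze would pool over all spatial dimensions of each channel and the gates would be learned end-to-end; this sketch only shows the data flow of the gating step.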

