
[Breast cancer lesion segmentation based on co-learning feature fusion and Transformer].

Author Information

Zhai Yuesong, Chen Zhili, Shao Dan

Affiliations

School of Computer Science and Engineering, Shenyang Jianzhu University, Shenyang 110168, P. R. China.

Department of Nuclear Medicine, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou 519041, P. R. China.

Publication Information

Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2024 Apr 25;41(2):237-245. doi: 10.7507/1001-5515.202306063.

Abstract

PET/CT imaging, which combines positron emission tomography (PET) with computed tomography (CT), is among the most advanced imaging examinations currently available and is mainly used for tumor screening, differential diagnosis of benign and malignant tumors, and staging and grading. This paper proposes a breast cancer lesion segmentation method based on bimodal PET/CT images and designs a dual-path U-Net framework comprising three modules: an encoder module, a feature fusion module, and a decoder module. The encoder module uses conventional convolution to extract features from each single-modality image; the feature fusion module adopts a co-learning feature fusion technique and uses a Transformer to extract global features of the fused image; and the decoder module uses a multi-layer perceptron to perform lesion segmentation. The algorithm was evaluated on actual clinical PET/CT data. The experimental results show that the accuracy, recall, and precision of breast cancer lesion segmentation are 95.67%, 97.58%, and 96.16%, respectively, outperforming the baseline algorithms. These results demonstrate the soundness of the single-modality and bimodal feature extraction approach combining convolution and Transformer designed in this paper, and provide a reference for feature extraction in tasks such as multimodal medical image segmentation and classification.
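The paper does not publish its implementation, but the co-learning fusion step described above can be illustrated with a minimal NumPy sketch: per-pixel modality weights are derived from the PET and CT feature maps and used to form a convex combination of the two. The softmax weighting, the function name `colearn_fuse`, and the feature shapes are assumptions for illustration only, not the authors' code.

```python
import numpy as np

def colearn_fuse(pet_feat, ct_feat):
    """Illustrative co-learning fusion (assumed form, not the paper's code).

    Both inputs are (H, W, C) feature maps from the two encoder paths.
    A crude per-pixel saliency score is computed for each modality, turned
    into weights with a softmax over the modality axis, and the fused map
    is the weighted (convex) combination of the two feature maps.
    """
    stacked = np.stack([pet_feat, ct_feat])            # (2, H, W, C)
    scores = stacked.mean(axis=-1, keepdims=True)      # (2, H, W, 1) saliency
    weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    return (weights * stacked).sum(axis=0)             # (H, W, C)

# Toy feature maps standing in for single-modality encoder outputs.
rng = np.random.default_rng(0)
pet = rng.random((8, 8, 4))
ct = rng.random((8, 8, 4))
fused = colearn_fuse(pet, ct)
print(fused.shape)  # (8, 8, 4)
```

In the described framework, a map like `fused` would then be passed to the Transformer to extract global features before the MLP decoder produces the segmentation; those stages are omitted here.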


