Dual-branch hybrid network for lesion segmentation in gastric cancer images.

Affiliations

Faculty of Information Technology, Beijing University of Technology, Beijing, China.

Department of Gastroenterology, The Second Medical Center and National Clinical Research Center for Geriatric Diseases, Chinese PLA, Beijing, China.

Publication Information

Sci Rep. 2023 Apr 19;13(1):6377. doi: 10.1038/s41598-023-33462-y.

Abstract

Effective segmentation of the lesion region in gastric cancer images can assist physicians in diagnosis and reduce the probability of misdiagnosis. The U-Net has been shown to provide segmentation results comparable to those of specialists in medical image segmentation because of its ability to extract high-level semantic information; however, it has limitations in capturing global contextual information. The Transformer, on the other hand, excels at modeling explicit long-range relations but cannot capture low-level detail. Hence, this paper proposes a Dual-Branch Hybrid Network that fuses a Transformer and a U-Net to overcome both limitations. We propose a Deep Feature Aggregation decoder (DFA) that aggregates only the deep features, obtaining salient lesion features for both branches while reducing model complexity. In addition, we design a Feature Fusion (FF) module that uses multi-modal fusion mechanisms to let the independent features of the two modalities interact, and a linear Hadamard product to fuse the feature information extracted from both branches. Finally, the Transformer loss, the U-Net loss, and the fused loss are each compared against the ground-truth label for joint training. Experimental results show that our proposed method achieves an IoU of 81.3%, a Dice coefficient of 89.5%, and an accuracy of 94.0%. These metrics demonstrate that our model outperforms existing models in producing high-quality segmentation results and has excellent potential for clinical analysis and diagnosis. The code and implementation details are available on GitHub at https://github.com/ZYY01/DBH-Net/ .
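
The two mechanisms named in the abstract, Hadamard-product feature fusion and the three-way joint loss, can be illustrated with a short sketch. The following is a minimal PyTorch sketch, not the authors' implementation: the module name FeatureFusion, the 1x1 projection convolutions, the choice of binary cross-entropy, and the equal loss weights are all illustrative assumptions; see the linked repository for the actual code.

```python
# Minimal sketch of Hadamard-product fusion and a three-way joint loss.
# Assumptions (not from the paper's repo): 1x1 conv projections, BCE loss,
# equal loss weights, and binary segmentation masks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureFusion(nn.Module):
    """Fuse same-resolution features from the CNN and Transformer branches."""
    def __init__(self, cnn_channels: int, trans_channels: int, out_channels: int):
        super().__init__()
        # Project both branches to a common channel width before fusing.
        self.proj_cnn = nn.Conv2d(cnn_channels, out_channels, kernel_size=1)
        self.proj_trans = nn.Conv2d(trans_channels, out_channels, kernel_size=1)

    def forward(self, f_cnn: torch.Tensor, f_trans: torch.Tensor) -> torch.Tensor:
        # Element-wise (Hadamard) product: each modality gates the other.
        return self.proj_cnn(f_cnn) * self.proj_trans(f_trans)

def joint_loss(pred_trans, pred_unet, pred_fused, target, weights=(1.0, 1.0, 1.0)):
    """Compare all three prediction heads against the same ground-truth mask."""
    preds = (pred_trans, pred_unet, pred_fused)
    return sum(w * F.binary_cross_entropy_with_logits(p, target)
               for w, p in zip(weights, preds))

if __name__ == "__main__":
    ff = FeatureFusion(cnn_channels=64, trans_channels=96, out_channels=64)
    fused = ff(torch.randn(2, 64, 28, 28), torch.randn(2, 96, 28, 28))
    print(fused.shape)  # torch.Size([2, 64, 28, 28])
```

In this reading, the Hadamard product acts as a mutual gating operation: a location contributes strongly to the fused map only when both branches respond there, which is one plausible way such a fusion could suppress single-branch false positives.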

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1095/10115814/cf7169809272/41598_2023_33462_Fig1_HTML.jpg
