DMFF-Net: A dual encoding multiscale feature fusion network for ovarian tumor segmentation.

Affiliations

School of Life Sciences, Tiangong University, Tianjin, China.

School of Control Science and Engineering, Tiangong University, Tianjin, China.

Publication Information

Front Public Health. 2023 Jan 11;10:1054177. doi: 10.3389/fpubh.2022.1054177. eCollection 2022.

Abstract

Ovarian cancer is a serious threat to the female reproductive system. Precise segmentation of the tumor area helps doctors further diagnose the disease. Automatic segmentation techniques that extract high-quality features from images through autonomous model learning have become a popular research topic. However, existing methods still segment ovarian tumor details poorly. To address this problem, a dual-encoding multiscale feature fusion network (DMFF-Net) is proposed for ovarian tumor segmentation. First, a dual encoding method is proposed to extract diverse features; the two encoding paths are composed of residual blocks and single dense aggregation blocks, respectively. Second, a multiscale feature fusion block is proposed to generate higher-level features. This block fuses features from the two encoding paths to mitigate feature loss during deep extraction and to further increase the information content of the features. Finally, coordinate attention is added after feature concatenation in the decoding stage, enabling the decoder to capture valid information accurately. Test results show that the proposed method outperforms existing medical image segmentation algorithms in segmenting lesion details, and it also performs well on two other segmentation tasks.
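The abstract names the building blocks but no code is reproduced here. The following is a minimal PyTorch sketch of the ideas as described: a residual encoding path, a dense aggregation encoding path, a fusion block between the two paths, and coordinate attention applied after feature concatenation. All module names (ResidualBlock, DenseAggregationBlock, MultiscaleFusion, CoordinateAttention), channel widths, the dilated-branch fusion design, and the simplified mean-pooling form of coordinate attention are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the blocks described in the abstract (assumed design,
# not the authors' code): dual encoding paths, a fusion block, and
# coordinate attention.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Residual block for the first (assumed) encoding path."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch))
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.body(x) + self.skip(x))


class DenseAggregationBlock(nn.Module):
    """Dense aggregation block for the second (assumed) encoding path:
    each conv sees the concatenation of all earlier feature maps."""
    def __init__(self, in_ch, out_ch, growth=32, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.BatchNorm2d(growth), nn.ReLU(True)))
            ch += growth
        self.squeeze = nn.Conv2d(ch, out_ch, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, 1)))
        return self.squeeze(torch.cat(feats, 1))


class MultiscaleFusion(nn.Module):
    """Fuses features from the two encoding paths at one resolution with
    parallel convolutions at different dilation rates (assumed design)."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(2 * ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.merge = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, f_res, f_dense):
        x = torch.cat([f_res, f_dense], 1)
        return self.merge(torch.cat([torch.relu(b(x)) for b in self.branches], 1))


class CoordinateAttention(nn.Module):
    """Coordinate attention (Hou et al., 2021), simplified: pool along H and W
    separately, then build direction-aware channel gates."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        mid = max(8, ch // reduction)
        self.conv1 = nn.Sequential(nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(True))
        self.conv_h = nn.Conv2d(mid, ch, 1)
        self.conv_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                       # N, C, H, 1
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # N, C, W, 1
        y = self.conv1(torch.cat([pool_h, pool_w], dim=2))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # gate over rows
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # gate over columns
        return x * a_h * a_w


if __name__ == "__main__":
    x = torch.randn(1, 1, 128, 128)            # one grayscale ultrasound slice (toy input)
    f_res = ResidualBlock(1, 64)(x)            # first encoding path
    f_dense = DenseAggregationBlock(1, 64)(x)  # second encoding path
    fused = MultiscaleFusion(64)(f_res, f_dense)
    out = CoordinateAttention(64)(fused)       # attention after fusion/concatenation
    print(out.shape)                           # torch.Size([1, 64, 128, 128])
```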

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/17c4/9875002/fd97842d1724/fpubh-10-1054177-g0001.jpg
