GL-Segnet: Global-Local representation learning net for medical image segmentation.

Author information

Gai Di, Zhang Jiqian, Xiao Yusong, Min Weidong, Chen Hui, Wang Qi, Su Pengxiang, Huang Zheng

Affiliations

School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China.

Jiangxi Key Laboratory of Smart City, Nanchang, China.

Publication information

Front Neurosci. 2023 Apr 3;17:1153356. doi: 10.3389/fnins.2023.1153356. eCollection 2023.

Abstract

Medical image segmentation has long been a compelling and fundamental problem in the realm of neuroscience. It is an extremely challenging task because irrelevant background information strongly interferes with segmentation of the target. State-of-the-art methods fail to address long-range and short-range dependencies simultaneously, and commonly emphasize semantic information characterization while ignoring the geometric detail information implied in the shallow feature maps, resulting in the loss of crucial features. To tackle the above problems, we propose a Global-Local representation learning net for medical image segmentation, namely GL-Segnet. In the Feature encoder, we utilize the Multi-Scale Convolution (MSC) and Multi-Scale Pooling (MSP) modules to encode global semantic representation information at the shallow level of the network, and multi-scale feature fusion operations are applied to enrich local geometric detail information in a cross-level manner. Beyond that, we adopt a global semantic feature extraction module to filter out irrelevant background information. In the Attention-enhancing Decoder, we use an attention-based feature decoding module to refine the multi-scale fused feature information, which provides effective cues for attention decoding. We exploit the structural similarity between images and the edge gradient information to propose a hybrid loss that improves the segmentation accuracy of the model. Extensive experiments on medical image segmentation on the GlaS, ISIC, Brain Tumors, and SIIM-ACR datasets demonstrate that our GL-Segnet is superior to existing state-of-the-art methods in subjective visual performance and objective evaluation.
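
The abstract describes a hybrid loss that combines structural similarity between images with edge gradient information. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' released implementation: the function names (ssim_loss, edge_gradient_loss, hybrid_loss), the Gaussian window size, the Sobel-based gradient term, the added binary cross-entropy term, and the weights alpha/beta are all assumptions made for the example.

```python
# Minimal sketch of an SSIM + edge-gradient hybrid segmentation loss (illustrative only;
# module names, window size, BCE term, and weights alpha/beta are assumptions).
import torch
import torch.nn.functional as F

def _gaussian_window(size: int = 11, sigma: float = 1.5) -> torch.Tensor:
    """2D Gaussian window, shape (1, 1, size, size), used for local SSIM statistics."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = (g / g.sum()).unsqueeze(0)                 # (1, size)
    return (g.t() @ g).unsqueeze(0).unsqueeze(0)   # (1, 1, size, size)

def ssim_loss(pred: torch.Tensor, target: torch.Tensor, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM for single-channel maps in [0, 1], shape (N, 1, H, W)."""
    window = _gaussian_window().to(pred.device)
    pad = window.shape[-1] // 2
    mu_p = F.conv2d(pred, window, padding=pad)
    mu_t = F.conv2d(target, window, padding=pad)
    var_p = F.conv2d(pred * pred, window, padding=pad) - mu_p ** 2
    var_t = F.conv2d(target * target, window, padding=pad) - mu_t ** 2
    cov = F.conv2d(pred * target, window, padding=pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1 - ssim_map.mean()

def edge_gradient_loss(pred: torch.Tensor, target: torch.Tensor):
    """L1 distance between Sobel edge-magnitude maps of prediction and ground truth."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=pred.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    def grad_mag(x):
        gx = F.conv2d(x, sobel_x, padding=1)
        gy = F.conv2d(x, sobel_y, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
    return F.l1_loss(grad_mag(pred), grad_mag(target))

def hybrid_loss(pred, target, alpha=1.0, beta=1.0):
    """BCE + alpha * SSIM term + beta * edge-gradient term; pred is a sigmoid probability map."""
    bce = F.binary_cross_entropy(pred, target)
    return bce + alpha * ssim_loss(pred, target) + beta * edge_gradient_loss(pred, target)
```

How the region term (here BCE) is weighted against the structure and edge terms would depend on the paper's actual formulation; the sketch only shows how the two cues described in the abstract can be combined into a single training objective.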

Figure (fnins-17-1153356-g0001): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bc6d/10106565/3eeea17d62cf/fnins-17-1153356-g0001.jpg
