Uncertainty-guided cross-level fusion network for retinal OCT image segmentation.

Authors

Wang Jiaxin, Zhu Weifang, Xiang Dehui, Chen Xinjian, Peng Tao, Peng Qing, Wang Meng, Shi Fei

Affiliations

MIPAV Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China.

The State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, China.

Publication

Med Phys. 2025 Sep;52(9):e18102. doi: 10.1002/mp.18102.

DOI: 10.1002/mp.18102
PMID: 40891145
Abstract

BACKGROUND

Deep learning-based segmentation methods for optical coherence tomography (OCT) have demonstrated outstanding performance. However, the stochastic distribution of training data and the inherent limitations of deep neural networks introduce uncertainty into the segmentation process. Accurately estimating this uncertainty is essential for generating reliable confidence assessments and improving model predictions.

PURPOSE

To address these challenges, we propose a novel uncertainty-guided cross-layer fusion network (UGCFNet) for retinal OCT segmentation. UGCFNet integrates uncertainty quantification into the training process of deep neural networks and leverages this uncertainty to enhance segmentation accuracy.

METHODS

Our model employs an encoder-decoder architecture that quantitatively assesses uncertainty at multiple stages, directing the network's focus toward regions with higher uncertainty. By facilitating cross-layer feature fusion, UGCFNet enhances the comprehensive understanding of both semantic information and morphological details. Additionally, we incorporate an improved Bayesian neural network loss function alongside an uncertainty-aware loss function, enabling the network to effectively utilize these mechanisms for better uncertainty modeling.
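The abstract does not state how the per-stage uncertainty is computed. A common choice, sketched below purely as an assumption, is the predictive entropy of the softmax output, which can then re-weight feature maps so the network attends to uncertain regions. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def entropy_uncertainty(logits):
    """Per-pixel predictive entropy of the softmax output.
    logits: (C, H, W) class scores; returns an (H, W) map where
    higher entropy marks less confident pixels."""
    logits = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=0, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=0)

def uncertainty_weighted(features, uncertainty):
    """Emphasize uncertain regions: scale a (C, H, W) feature map by
    weights in [1, 2] derived from the normalized uncertainty map."""
    weights = 1.0 + uncertainty / (uncertainty.max() + 1e-12)
    return features * weights[None, :, :]
```

With uniform logits the entropy reaches its maximum, log C, so every pixel is treated as maximally uncertain; confident pixels keep a weight near 1 while uncertain ones approach 2.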

RESULTS

We conducted extensive experiments on the publicly available AI-Challenger and OIMHS OCT segmentation datasets. The training, validation, and testing sets of the AI-Challenger dataset comprise 32, 8, and 43 OCT volumes (4096, 1024, and 5504 B-scans, respectively), while those of the OIMHS dataset comprise 100, 25, and 25 OCT volumes (2310, 798, and 751 B-scans, respectively). The results demonstrate that UGCFNet achieves state-of-the-art performance, with average Dice similarity coefficients of 79.47% and 93.22% on the respective datasets.
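The Dice similarity coefficient reported above can be computed per class from binary masks as follows; this is a minimal sketch with an illustrative smoothing term, not the paper's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For multi-class segmentation the score is typically computed one class at a time and averaged, which matches the "average Dice similarity coefficient" phrasing above.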

CONCLUSION

Our proposed UGCFNet significantly advances retinal OCT segmentation by integrating uncertainty guidance and cross-level feature fusion, offering more reliable and accurate segmentation outcomes.


Similar Articles

1. Uncertainty-guided cross-level fusion network for retinal OCT image segmentation.
   Med Phys. 2025 Sep;52(9):e18102. doi: 10.1002/mp.18102.
2. Automated segmentation of hyperreflective foci in OCT images for diabetic retinopathy using deep convolutional networks.
   Appl Opt. 2025 Apr 20;64(12):3180-3192. doi: 10.1364/AO.547758.
3. Joint segmentation of retinal layers and fluid lesions in optical coherence tomography with cross-dataset learning.
   Artif Intell Med. 2025 Apr;162:103096. doi: 10.1016/j.artmed.2025.103096. Epub 2025 Feb 21.
4. Point-cloud segmentation with in-silico data augmentation for prostate cancer treatment.
   Med Phys. 2025 Apr 3. doi: 10.1002/mp.17815.
5. Identifying Retinal Features Using a Self-Configuring CNN for Clinical Intervention.
   Invest Ophthalmol Vis Sci. 2025 Jun 2;66(6):55. doi: 10.1167/iovs.66.6.55.
6. Novel Deep Learning Model for Glaucoma Detection Using Fusion of Fundus and Optical Coherence Tomography Images.
   Sensors (Basel). 2025 Jul 11;25(14):4337. doi: 10.3390/s25144337.
7. Structural semantic-guided MR synthesis from PET images via a dual cross-attention mechanism.
   Med Phys. 2025 Jul;52(7):e17957. doi: 10.1002/mp.17957.
8. Fluid-SegNet: Multi-dimensional loss-driven Y-Net with dilated convolutions for OCT B-scan fluid segmentation.
   Comput Med Imaging Graph. 2025 Sep;124:102613. doi: 10.1016/j.compmedimag.2025.102613. Epub 2025 Jul 31.
9. Diffusion semantic segmentation model: A generative model for medical image segmentation based on joint distribution.
   Med Phys. 2025 Jul;52(7):e17928. doi: 10.1002/mp.17928. Epub 2025 Jun 8.
10. TLTNet: A novel transscale cascade layered transformer network for enhanced retinal blood vessel segmentation.
   Comput Biol Med. 2024 Aug;178:108773. doi: 10.1016/j.compbiomed.2024.108773. Epub 2024 Jun 25.