
Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction.

Author Affiliations

School of Electronic and Information Engineering, University of Science and Technology Liaoning, Anshan, 114051, China.

School of Biological Science and Medical Engineering, Southeast University, Nanjing, Jiangsu, 210000, China.

Publication Information

Med Phys. 2021 Jul;48(7):3827-3841. doi: 10.1002/mp.14944. Epub 2021 Jun 16.

DOI: 10.1002/mp.14944
PMID: 34028030
Abstract

PURPOSE

The segmentation of retinal blood vessels has a significant impact on the automatic diagnosis of various ophthalmic diseases. To further improve the segmentation accuracy of retinal vessels, we propose an improved algorithm based on multiscale vessel detection, which extracts features through densely connected networks and reuses those features.

METHODS

Two densely connected multiscale-feature U-Net structures are designed: parallel fusion and serial embedding. In the parallel fusion method, features of the input image are extracted by Inception multiscale convolution and by dense-block convolution separately, and the two sets of features are then fused and fed into the subsequent network. In the serial embedding mode, the Inception multiscale convolution structure is embedded inside the densely connected network module, and this dense connection structure replaces the classical convolution blocks in the U-Net encoder. Both designs achieve multiscale feature extraction and efficient feature reuse for vessels with complex structure, thereby improving segmentation performance.
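The serial-embedding idea described above can be sketched as a small PyTorch module: an Inception-style multiscale convolution used as the layer type inside a densely connected block, which would stand in for a plain convolution block in the U-Net encoder. This is a hypothetical illustration, not the authors' implementation; the kernel sizes (1/3/5), branch widths, and layer depth are assumptions chosen for brevity.

```python
# Sketch of "serial embedding": Inception multiscale convolution
# embedded inside a densely connected block. Channel sizes are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class InceptionConv(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions; outputs are concatenated."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class DenseInceptionBlock(nn.Module):
    """Dense connectivity: each layer receives the concatenation of the
    input and all earlier layers' outputs, so features are reused
    rather than recomputed."""
    def __init__(self, in_ch, branch_ch=8, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(InceptionConv(ch, branch_ch))
            ch += 3 * branch_ch  # each Inception layer adds 3*branch_ch maps
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseInceptionBlock(in_ch=1)          # single-channel fundus patch
y = block(torch.randn(1, 1, 32, 32))          # spatial size is preserved
```

The dense concatenation grows the channel count with every layer (here 1 → 73 channels), which is the "feature reuse" the abstract refers to: later layers see all earlier feature maps directly.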

RESULTS

Experiments on the standard DRIVE and CHASE_DB1 databases show that the parallel fusion method reaches a sensitivity, specificity, accuracy, and AUC of 0.7854, 0.9813, 0.9563, and 0.9794 on DRIVE and 0.7876, 0.9811, 0.9565, and 0.9793 on CHASE_DB1, while the serial embedding method reaches 0.8110, 0.9737, 0.9547, and 0.9667 on DRIVE and 0.8113, 0.9717, 0.9574, and 0.9750 on CHASE_DB1.
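Three of the four reported metrics follow directly from a per-pixel confusion matrix. A minimal NumPy sketch, assuming binary prediction and ground-truth vessel masks (AUC is omitted because it additionally requires soft probability scores rather than hard masks):

```python
# Pixel-wise segmentation metrics from binary masks:
# sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
# accuracy = (TP+TN)/total pixels.
import numpy as np

def segmentation_metrics(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background correctly rejected
    fp = np.sum(pred & ~truth)    # background flagged as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / truth.size
    return sensitivity, specificity, accuracy

# Toy 2x4 masks for illustration
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0]])
pred  = np.array([[1, 0, 0, 0], [1, 0, 1, 0]])
sens, spec, acc = segmentation_metrics(pred, truth)
# sens = 2/3, spec = 4/5, acc = 6/8
```

Because vessel pixels are a small minority of a fundus image, accuracy is dominated by background (true negatives), which is why sensitivity is the harder number to raise and is reported separately.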

CONCLUSIONS

The experimental results show that multiscale feature detection and feature dense connection can effectively enhance the network model's ability to detect blood vessels and improve the network segmentation performance, which is superior to U-Net algorithm and some mainstream retinal blood vessel segmentation algorithms at present.


Similar Articles

1. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction.
Med Phys. 2021 Jul;48(7):3827-3841. doi: 10.1002/mp.14944. Epub 2021 Jun 16.
2. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation.
Comput Biol Med. 2023 Sep;163:107132. doi: 10.1016/j.compbiomed.2023.107132. Epub 2023 Jun 10.
3. UNet retinal blood vessel segmentation algorithm based on improved pyramid pooling method and attention mechanism.
Phys Med Biol. 2021 Aug 26;66(17). doi: 10.1088/1361-6560/ac1c4c.
4. Multiscale U-Net with Spatial Positional Attention for Retinal Vessel Segmentation.
J Healthc Eng. 2022 Jan 10;2022:5188362. doi: 10.1155/2022/5188362. eCollection 2022.
5. Retinal blood vessel segmentation based on Densely Connected U-Net.
Math Biosci Eng. 2020 Apr 15;17(4):3088-3108. doi: 10.3934/mbe.2020175.
6. SFA-Net: Scale and Feature Aggregate Network for Retinal Vessel Segmentation.
J Healthc Eng. 2022 Oct 21;2022:4695136. doi: 10.1155/2022/4695136. eCollection 2022.
7. Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules.
Med Biol Eng Comput. 2023 Jul;61(7):1745-1755. doi: 10.1007/s11517-023-02806-1. Epub 2023 Mar 10.
8. TDCAU-Net: retinal vessel segmentation using transformer dilated convolutional attention-based U-Net method.
Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad1273.
9. PCAT-UNet: UNet-like network fused convolution and transformer for retinal vessel segmentation.
PLoS One. 2022 Jan 24;17(1):e0262689. doi: 10.1371/journal.pone.0262689. eCollection 2022.
10. MFI-Net: Multiscale Feature Interaction Network for Retinal Vessel Segmentation.
IEEE J Biomed Health Inform. 2022 Sep;26(9):4551-4562. doi: 10.1109/JBHI.2022.3182471. Epub 2022 Sep 9.

Articles Citing This Work

1. VESCL: an open source 2D vessel contouring library.
Int J Comput Assist Radiol Surg. 2024 Aug;19(8):1627-1636. doi: 10.1007/s11548-024-03212-0. Epub 2024 Jun 16.
2. BiDFDC-Net: a dense connection network based on bi-directional feedback for skin image segmentation.
Front Physiol. 2023 Jun 20;14:1173108. doi: 10.3389/fphys.2023.1173108. eCollection 2023.