

Unsupervised retinal image registration based on D-STUNet and progressive keypoint screening strategy.

Authors

Deng Xiangyu, Kang Jiayi

Affiliations

Institute of Information Technology, Northwest Normal University, Lanzhou 730070, People's Republic of China.

College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070, People's Republic of China.

Publication

Biomed Phys Eng Express. 2025 Jul 9;11(4). doi: 10.1088/2057-1976/ade9c6.

Abstract

Retinal image registration improves the accuracy and validity of a doctor's diagnosis and plays a crucial role in the monitoring and treatment of associated diseases. However, most existing image registration methods have limitations in identifying retinal vascular features, making it difficult to achieve desirable results in retinal image registration tasks. To solve this problem, a fusion network of Swin Transformer and U-Net, improved by a Differential Multi-scale Convolutional Block Attention Module with Residual Mechanism (DMCR) and named D-STUNet, is proposed in conjunction with a designed Progressive Keypoint Screening (PKS) strategy.

APPROACH

The D-STUNet network is built on an encoder-decoder framework and employs DMCR to improve and fuse the Swin Transformer and U-Net networks. The DMCR module enhances the network's ability to focus on retinal vascular features, which effectively improves registration accuracy when training data are limited. In addition, the network introduces the PKS strategy to gradually accumulate effective keypoint information during training, ensuring that keypoints concentrate in the retinal vascular region and thereby improving the matching rate and overall detection performance.
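The abstract does not give implementation details for the PKS strategy, so the following is only a minimal illustrative sketch of the idea of progressive screening: candidate keypoints are filtered by a confidence threshold that tightens over training, so the surviving set gradually concentrates on high-confidence (e.g. vascular) responses. The function name, the quantile schedule, and all parameters here are assumptions for illustration, not the authors' method.

```python
import numpy as np

def progressive_keypoint_screening(scores, epoch, total_epochs,
                                   q_start=0.5, q_end=0.9):
    """Illustrative progressive screening (assumed schedule, not the paper's).

    Keeps keypoints whose confidence exceeds a quantile threshold that
    rises linearly over training, so fewer, stronger keypoints survive
    as training progresses.

    scores : (N,) array of per-keypoint confidence values.
    Returns a boolean mask of keypoints retained at this epoch.
    """
    t = epoch / max(total_epochs - 1, 1)      # training progress in [0, 1]
    q = q_start + t * (q_end - q_start)       # linearly rising quantile
    threshold = np.quantile(scores, q)
    return scores >= threshold

# Early in training most candidates survive; late, only the strongest do.
rng = np.random.default_rng(0)
scores = rng.random(1000)
early = progressive_keypoint_screening(scores, epoch=0, total_epochs=100)
late = progressive_keypoint_screening(scores, epoch=99, total_epochs=100)
print(early.sum(), late.sum())  # roughly 500 vs 100 keypoints retained
```

In a real pipeline the scores would come from the detector head of the registration network, and the retained keypoints would feed the matching stage at each training step.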

MAIN RESULTS

Registration validation is conducted on the publicly accessible Fundus Image Registration Dataset (FIRE) and the method is compared with nine algorithms. The experimental results show that the algorithm achieves an acceptance rate of 98.50%, a failure rate of 0%, and an inaccuracy rate of 1.50%. For the area under the curve (AUC) metric, the AUC for the Easy group is 0.929, while the AUCs for the Mod and Hard groups are 0.883 and 0.724, respectively. The proposed method achieves the highest mean area under the curve (mAUC) among all compared algorithms, outperforming the second-best algorithm by 0.09. Although it does not reach the optimum in certain subcategories (such as AUC-Easy), its overall performance is significantly superior to existing methods.

SIGNIFICANCE

The proposed network is able to effectively capture local features such as complex vascular structures in retinal images, providing a new method to improve the registration accuracy of retinal images.

