
Unsupervised Non-Rigid Histological Image Registration Guided by Keypoint Correspondences Based on Learnable Deep Features With Iterative Training.

Author Information

Wei Xingyue, Ge Lin, Huang Lijie, Luo Jianwen, Xu Yan

Publication Information

IEEE Trans Med Imaging. 2025 Jan;44(1):447-461. doi: 10.1109/TMI.2024.3447214. Epub 2025 Jan 2.

Abstract

Histological image registration is a fundamental task in histological image analysis. It is challenging because of the substantial appearance differences caused by multiple stains. Keypoint correspondences, i.e., matched keypoint pairs, have been introduced to guide unsupervised deep learning (DL) based registration methods to handle such a registration task. This paper proposes an iterative keypoint correspondence-guided (IKCG) unsupervised network for non-rigid histological image registration. Fixed deep features and learnable deep features are introduced as keypoint descriptors to automatically establish keypoint correspondences, the distance between which is used as a loss function to train the registration network. Fixed deep features extracted from DL networks pre-trained on natural image datasets are more discriminative than handcrafted ones, benefiting from the deep and hierarchical nature of DL networks. The intermediate layer outputs of the registration networks trained on histological image datasets are extracted as learnable deep features, which reveal information unique to histological images. An iterative training strategy is adopted to train the registration network and optimize the learnable deep features jointly. Benefiting from the excellent matching ability of the learnable deep features optimized with the iterative training strategy, the proposed method can solve the local non-rigid large-displacement problem, an inevitable problem usually caused by mishandling, such as tears introduced when producing tissue slices. The proposed method is evaluated on the Automatic Non-rigid Histology Image Registration (ANHIR) website and the AutomatiC Registration Of Breast cAncer Tissue (ACROBAT) website. It ranked 1st on both websites as of August 6th, 2024.
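The abstract describes training the registration network with an unsupervised loss defined as the distance between automatically matched keypoint pairs. The following is a minimal, hypothetical PyTorch-style sketch of such a correspondence-distance loss; the function name `correspondence_loss`, the tensor shapes, and the normalized-coordinate convention are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): the registration
# network predicts a dense displacement field on the fixed-image grid, and
# the loss is the mean distance between each fixed keypoint warped by that
# field and its matched moving keypoint.
import torch
import torch.nn.functional as F


def correspondence_loss(disp_field, kpts_fixed, kpts_moving):
    """disp_field:  (1, 2, H, W) displacement field, channels (dx, dy),
                    expressed in the same normalized [-1, 1] coordinates
                    that grid_sample uses.
       kpts_fixed:  (N, 2) matched keypoints in the fixed image, (x, y) in [-1, 1].
       kpts_moving: (N, 2) matched keypoints in the moving image, (x, y) in [-1, 1].
    """
    # Bilinearly sample the predicted displacement at each fixed keypoint.
    grid = kpts_fixed.view(1, -1, 1, 2)                      # (1, N, 1, 2)
    disp = F.grid_sample(disp_field, grid, align_corners=True)
    disp = disp.squeeze(-1).squeeze(0).t()                   # (N, 2)

    # A correct registration moves each fixed keypoint onto its moving match.
    warped = kpts_fixed + disp
    return (warped - kpts_moving).norm(dim=1).mean()


# Toy usage: an identity field with nearly coincident keypoint pairs.
disp = torch.zeros(1, 2, 64, 64, requires_grad=True)
kf = torch.rand(10, 2) * 2 - 1
km = kf + 0.05 * torch.randn(10, 2)
loss = correspondence_loss(disp, kf, km)
loss.backward()
```

Under the iterative training strategy described in the abstract, one would additionally re-extract the registration network's intermediate-layer outputs as updated (learnable) keypoint descriptors after each training round, re-match the keypoints, and continue training with the refreshed correspondences.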

