Multi-Manifold Deep Discriminative Cross-Modal Hashing for Medical Image Retrieval.

Publication Information

IEEE Trans Image Process. 2022;31:3371-3385. doi: 10.1109/TIP.2022.3171081. Epub 2022 May 9.

Abstract

Benefiting from low storage cost and high retrieval efficiency, hash learning has become a widely used technology for approximate nearest-neighbor retrieval. Within it, cross-modal medical hashing has attracted increasing attention for facilitating efficient clinical decision-making. However, two main challenges remain: weak preservation of multi-manifold structure across multiple modalities, and weak discriminability of hash codes. Specifically, existing cross-modal hashing methods focus on pairwise relations within two modalities and ignore the underlying multi-manifold structures across more than two modalities. Moreover, little consideration has been given to discriminability, i.e., the requirement that any pair of hash codes should be different. In this paper, we propose a novel hashing method named multi-manifold deep discriminative cross-modal hashing (MDDCH) for large-scale medical image retrieval. The key point is a multi-modal manifold similarity that integrates multiple sub-manifolds defined on heterogeneous data to preserve the correlation among instances; it can be measured by a three-step connection on the corresponding hetero-manifold. We then propose a discriminative term that makes each hash code produced by the hash functions distinct, which improves the discriminative performance of the hash codes. In addition, we introduce a Gaussian-binary Restricted Boltzmann Machine to output hash codes directly, without any continuous relaxation. Experiments on three benchmark datasets (AIBL, Brain and SPLP) show that the proposed MDDCH achieves performance comparable to recent state-of-the-art hashing methods. Additionally, diagnostic evaluation by professional physicians shows that all retrieved medical images depict the same object and illness as the query image.
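The abstract's premise, that binary hash codes allow fast approximate nearest-neighbor retrieval via Hamming distance, can be illustrated with a minimal sketch. The function name and toy 8-bit codes below are illustrative only and are not the paper's MDDCH method:

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, k=3):
    """Return indices of the k database codes closest to the query in Hamming distance."""
    # XOR-like comparison: count differing bits per database entry.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    # Stable sort keeps database order among ties.
    return np.argsort(dists, kind="stable")[:k]

# Toy 8-bit hash codes for five database items.
db = np.array([
    [0, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 0],
], dtype=np.uint8)
query = np.array([0, 0, 0, 0, 0, 0, 0, 0], dtype=np.uint8)

print(hamming_retrieve(query, db, k=3))  # exact match first, then 1-bit and 2-bit neighbors
```

Because each comparison is a bit count rather than a floating-point distance, retrieval over millions of codes stays cheap in both time and storage, which is the efficiency argument the abstract leads with.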

