Triplet-Based Deep Hashing Network for Cross-Modal Retrieval.

Author Information

Deng Cheng, Chen Zhaojia, Liu Xianglong, Gao Xinbo, Tao Dacheng

Publication Information

IEEE Trans Image Process. 2018 Aug;27(8):3893-3903. doi: 10.1109/TIP.2018.2821921. Epub 2018 Apr 4.

DOI: 10.1109/TIP.2018.2821921
PMID: 29993656
Abstract

Given its low storage requirements and high retrieval efficiency, hashing has recently received increasing attention. In particular, cross-modal hashing has been widely and successfully used in multimedia similarity search applications. However, almost all existing cross-modal hashing methods fail to obtain powerful hash codes because they ignore the relative similarity between heterogeneous data, which contains richer semantic information, leading to unsatisfactory retrieval performance. In this paper, we propose a triplet-based deep hashing (TDH) network for cross-modal retrieval. First, we utilize triplet labels, which describe the relative relationships among three instances, as supervision in order to capture more general semantic correlations between cross-modal instances. We then establish a loss function from both the inter-modal view and the intra-modal view to boost the discriminative ability of the hash codes. Finally, graph regularization is introduced into the proposed TDH method to preserve the original semantic similarity between hash codes in Hamming space. Experimental results show that the proposed method outperforms several state-of-the-art approaches on two popular cross-modal datasets.
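The core supervision signal described in the abstract, a triplet margin loss over relaxed hash codes, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the function names and the margin value are assumptions, and the actual TDH model trains modality-specific deep networks with additional inter-modal, intra-modal, and graph-regularization terms.

```python
import numpy as np

def code_distance(a, b):
    # For codes relaxed to [-1, 1] (e.g. tanh outputs), squared Euclidean
    # distance is monotonically related to Hamming distance on sign(codes).
    return np.sum((a - b) ** 2, axis=-1)

def triplet_hash_loss(anchor, positive, negative, margin=2.0):
    """Hinge-style triplet loss: push the anchor's code at least `margin`
    closer to the positive (similar instance, possibly from another
    modality) than to the negative (dissimilar instance)."""
    d_pos = code_distance(anchor, positive)
    d_neg = code_distance(anchor, negative)
    return np.maximum(0.0, margin + d_pos - d_neg).mean()

# Toy relaxed hash codes standing in for two-modality network outputs.
rng = np.random.default_rng(0)
anchor = np.tanh(rng.normal(size=(4, 16)))          # e.g. image codes
positive = np.tanh(anchor + 0.1 * rng.normal(size=(4, 16)))  # matching text
negative = np.tanh(rng.normal(size=(4, 16)))        # unrelated instances
print(triplet_hash_loss(anchor, positive, negative))
```

Because the triplet only constrains *relative* distances, it can supervise heterogeneous pairs (image anchor, text positive/negative) without requiring a calibrated absolute similarity between modalities, which is the property the abstract credits for richer semantic information.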


Similar Articles

1. Triplet-Based Deep Hashing Network for Cross-Modal Retrieval.
IEEE Trans Image Process. 2018 Aug;27(8):3893-3903. doi: 10.1109/TIP.2018.2821921. Epub 2018 Apr 4.
2. Deep Semantic-Preserving Ordinal Hashing for Cross-Modal Similarity Search.
IEEE Trans Neural Netw Learn Syst. 2019 May;30(5):1429-1440. doi: 10.1109/TNNLS.2018.2869601. Epub 2018 Oct 1.
3. Quadruplet-Based Deep Cross-Modal Hashing.
Comput Intell Neurosci. 2021 Jul 2;2021:9968716. doi: 10.1155/2021/9968716. eCollection 2021.
4. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.
IEEE Trans Image Process. 2018;27(1):106-120. doi: 10.1109/TIP.2017.2755766.
5. Label Consistent Matrix Factorization Hashing for Large-Scale Cross-Modal Similarity Search.
IEEE Trans Pattern Anal Mach Intell. 2019 Oct;41(10):2466-2479. doi: 10.1109/TPAMI.2018.2861000. Epub 2018 Jul 30.
6. Random Online Hashing for Cross-Modal Retrieval.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):677-691. doi: 10.1109/TNNLS.2023.3330975. Epub 2025 Jan 7.
7. Unsupervised Semantic-Preserving Adversarial Hashing for Image Search.
IEEE Trans Image Process. 2019 Aug;28(8):4032-4044. doi: 10.1109/TIP.2019.2903661. Epub 2019 Mar 13.
8. Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
IEEE Trans Image Process. 2016 Oct;25(10):4540-54. doi: 10.1109/TIP.2016.2592800. Epub 2016 Jul 18.
9. Supervised Matrix Factorization Hashing for Cross-Modal Retrieval.
IEEE Trans Image Process. 2016 Jul;25(7):3157-3166. doi: 10.1109/TIP.2016.2564638. Epub 2016 May 6.
10. Hierarchical semantic interaction-based deep hashing network for cross-modal retrieval.
PeerJ Comput Sci. 2021 May 25;7:e552. doi: 10.7717/peerj-cs.552. eCollection 2021.

Cited By

1. An Innovative Attention-based Triplet Deep Hashing Approach to Retrieve Histopathology Images.
J Imaging Inform Med. 2024 Nov 11. doi: 10.1007/s10278-024-01310-8.
2. Object-Level Visual-Text Correlation Graph Hashing for Unsupervised Cross-Modal Retrieval.
Sensors (Basel). 2022 Apr 11;22(8):2921. doi: 10.3390/s22082921.
3. Deep Disentangled Hashing with Momentum Triplets for Neuroimage Search.
Med Image Comput Comput Assist Interv. 2020;12261:191-201. doi: 10.1007/978-3-030-59710-8_19. Epub 2020 Sep 29.
4. Deep Unsupervised Hashing for Large-Scale Cross-Modal Retrieval Using Knowledge Distillation Model.
Comput Intell Neurosci. 2021 Jul 17;2021:5107034. doi: 10.1155/2021/5107034. eCollection 2021.
5. Quadruplet-Based Deep Cross-Modal Hashing.
Comput Intell Neurosci. 2021 Jul 2;2021:9968716. doi: 10.1155/2021/9968716. eCollection 2021.
6. Hierarchical semantic interaction-based deep hashing network for cross-modal retrieval.
PeerJ Comput Sci. 2021 May 25;7:e552. doi: 10.7717/peerj-cs.552. eCollection 2021.
7. Cross-modal representation alignment of molecular structure and perturbation-induced transcriptional profiles.
Pac Symp Biocomput. 2021;26:273-284.
8. Cross-Modal Search for Social Networks via Adversarial Learning.
Comput Intell Neurosci. 2020 Jul 11;2020:7834953. doi: 10.1155/2020/7834953. eCollection 2020.
9. Triplet Deep Hashing with Joint Supervised Loss Based on Deep Neural Networks.
Comput Intell Neurosci. 2019 Oct 9;2019:8490364. doi: 10.1155/2019/8490364. eCollection 2019.