IEEE Trans Pattern Anal Mach Intell. 2018 Feb;40(2):437-451. doi: 10.1109/TPAMI.2017.2666812. Epub 2017 Feb 9.
This paper presents a simple yet effective supervised deep hashing approach that constructs binary hash codes from labeled data for large-scale image search. We assume that the semantic labels are governed by several latent attributes, with each attribute on or off, and that classification relies on these attributes. Based on this assumption, our approach, dubbed supervised semantics-preserving deep hashing (SSDH), constructs hash functions as a latent layer in a deep network, and the binary codes are learned by minimizing an objective function defined over classification error and other desirable hash-code properties. With this design, SSDH has the appealing property that classification and retrieval are unified in a single learning model. Moreover, SSDH performs joint learning of image representations, hash codes, and classification in a point-wise manner, and is thus scalable to large-scale datasets. SSDH is simple and can be realized with a slight modification of an existing deep architecture for classification; yet it is effective, outperforming other hashing approaches on several benchmarks and large datasets. Compared with state-of-the-art approaches, SSDH achieves higher retrieval accuracy while not sacrificing classification performance.
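The core design described above, a latent layer whose sigmoid activations serve both as inputs to a classifier and, after thresholding, as binary hash codes, can be illustrated with a minimal NumPy sketch. All shapes, weight names, and the specific penalty terms below are illustrative assumptions, not the authors' exact network or objective:

```python
import numpy as np

# Hypothetical sketch of the SSDH idea: a "latent" hash layer inserted
# between a deep feature extractor and the classification head. The same
# layer yields (a) binarized codes for retrieval and (b) logits for the
# supervised classification loss, so the two tasks share one model.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_feat, n_bits, n_classes = 128, 48, 10            # assumed sizes
W_h = rng.normal(scale=0.1, size=(n_feat, n_bits))    # latent hash layer
W_c = rng.normal(scale=0.1, size=(n_bits, n_classes)) # classifier on codes

feats = rng.normal(size=(4, n_feat))   # stand-in for deep image features
h = sigmoid(feats @ W_h)               # latent activations in (0, 1)
codes = (h >= 0.5).astype(np.int8)     # binary hash codes for retrieval
logits = h @ W_c                       # classification head on the same layer

# Examples of "desirable hash code properties" a joint objective might
# penalize (illustrative, not the paper's exact terms): push activations
# toward 0/1, and keep each bit balanced (on for ~50% of images).
binarization_penalty = -np.mean((h - 0.5) ** 2)       # lower near 0 or 1
balance_penalty = np.mean((h.mean(axis=0) - 0.5) ** 2)
```

Because the loss is evaluated per image (point-wise) rather than over pairs or triplets of images, training cost grows linearly with dataset size, which is what makes the approach scalable.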