Qiao Fengcai, Wang Cheng, Zhang Xin, Wang Hui
College of Information Systems and Management, National University of Defense Technology, Changsha 410073, China.
ScientificWorldJournal. 2013 Sep 14;2013:795408. doi: 10.1155/2013/795408. eCollection 2013.
Near-duplicate image retrieval is a classical research problem in computer vision, with many applications such as image annotation and content-based image retrieval. On the web, near-duplication is especially prevalent in queries for celebrities and historical figures, which are of particular interest to end users. Existing methods such as bag-of-visual-words (BoVW) address this problem mainly by exploiting purely visual features. To overcome this limitation, this paper proposes a novel text-based data-driven reranking framework, which utilizes textual features and is combined with state-of-the-art BoVW schemes. Under this framework, the input to the retrieval procedure is still only a query image. To verify the proposed approach, a dataset of 2 million images of 1089 different celebrities, together with their accompanying texts, is constructed. In addition, we comprehensively analyze the different categories of near-duplication observed in the constructed dataset. Experimental results on this dataset show that the proposed framework achieves higher mean average precision (mAP), with an average improvement of 21% over approaches based only on visual features, while not notably prolonging the retrieval time.
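The abstract describes the reranking framework only at a high level. A minimal sketch of one plausible instantiation follows, assuming BoVW histograms and bag-of-words text histograms stored as `Counter` objects; the function names, the weighting parameter `alpha`, and the strategy of borrowing pseudo-text for the image query from its best visual match are illustrative assumptions, not the paper's exact method:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two sparse term-count dicts."""
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rerank(query_bovw, database, alpha=0.7, top_k=3):
    """Rank database images by visual (BoVW) similarity, then rerank
    the top candidates by a weighted mix of visual and textual
    similarity. The query's text is borrowed from its best visual
    match, so the retrieval input is still only a query image."""
    # Stage 1: purely visual ranking over the whole database.
    scored = sorted(database,
                    key=lambda d: cosine(query_bovw, d["bovw"]),
                    reverse=True)
    candidates = scored[:top_k]
    # Data-driven pseudo-text: accompanying text of the top visual hit.
    query_text = candidates[0]["text"]
    # Stage 2: combined visual + textual score on the short list only,
    # so the extra cost does not notably prolong retrieval.
    def combined(d):
        return (alpha * cosine(query_bovw, d["bovw"])
                + (1 - alpha) * cosine(query_text, d["text"]))
    return sorted(candidates, key=combined, reverse=True)
```

Because the textual score is computed only for the `top_k` visual candidates, the reranking stage adds a constant amount of work per query regardless of database size, consistent with the claim that retrieval time is not notably prolonged.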