National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA.
J Biomed Inform. 2017 Nov;75:122-127. doi: 10.1016/j.jbi.2017.09.014. Epub 2017 Oct 3.
The main approach of traditional information retrieval (IR) is to examine how many words from a query appear in a document. A drawback of this approach, however, is that it may fail to detect relevant documents in which no or only a few query words are found. Semantic analysis methods such as LSA (latent semantic analysis) and LDA (latent Dirichlet allocation) have been proposed to address this issue, but their performance has not proved superior to that of common IR approaches. Here we present a query-document similarity measure motivated by the Word Mover's Distance. Unlike other similarity measures, the proposed method relies on neural word embeddings to compute the distance between words. This process helps identify related words when no direct matches are found between a query and a document. Our method is efficient and straightforward to implement. Experimental results on TREC Genomics data show that our approach outperforms the BM25 ranking function by an average of 12% in mean average precision. Furthermore, on a real-world dataset collected from PubMed search logs, combining the semantic measure with BM25 using a learning-to-rank method improves ranking scores by up to 25%. This experiment demonstrates that the proposed approach and BM25 complement each other well and together produce superior performance.
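The abstract does not give the formal definition of the measure; as a rough illustration only, the sketch below shows a relaxed Word Mover's Distance-style score in which each query word is matched to its nearest document word in embedding space and the matched distances are averaged. The `embeddings` dictionary, the Euclidean distance, and the greedy nearest-word relaxation are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def embedding_distance(u, v):
    """Euclidean distance between two word vectors."""
    return np.linalg.norm(u - v)

def query_doc_similarity(query_tokens, doc_tokens, embeddings):
    """
    Relaxed WMD-style score: match each query word to its closest
    document word in embedding space and average the distances.
    Lower distance means a more relevant document, so the negated
    mean is returned as a similarity score (higher = better).
    """
    # Keep only tokens that have an embedding (hypothetical lookup table).
    q_vecs = [embeddings[t] for t in query_tokens if t in embeddings]
    d_vecs = [embeddings[t] for t in doc_tokens if t in embeddings]
    if not q_vecs or not d_vecs:
        return float("-inf")  # no usable terms; rank last

    total = 0.0
    for qv in q_vecs:
        # Distance from this query word to its nearest document word,
        # which lets related (not identical) words contribute to the score.
        total += min(embedding_distance(qv, dv) for dv in d_vecs)
    return -total / len(q_vecs)

# Toy usage with made-up 3-d vectors standing in for trained word embeddings.
emb = {
    "cancer":  np.array([0.9, 0.1, 0.0]),
    "tumor":   np.array([0.8, 0.2, 0.1]),
    "genome":  np.array([0.1, 0.9, 0.3]),
    "weather": np.array([0.0, 0.1, 0.9]),
}
print(query_doc_similarity(["cancer"], ["tumor", "genome"], emb))  # close in embedding space
print(query_doc_similarity(["cancer"], ["weather"], emb))          # distant, lower score
```

In a setting like the one described, such a semantic score would be supplied as one feature alongside BM25 to a learning-to-rank model rather than used in isolation.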