Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA.
Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA.
J Biomed Inform. 2020 Nov;111:103581. doi: 10.1016/j.jbi.2020.103581. Epub 2020 Oct 1.
Currently, a major limitation for natural language processing (NLP) analyses in clinical applications is that the same concepts are referenced in varied forms across different texts and are not effectively linked. This paper introduces Multi-Ontology Refined Embeddings (MORE), a novel hybrid framework that incorporates domain knowledge from multiple ontologies into a distributional semantic model learned from a corpus of clinical text.
We use the RadCore and MIMIC-III free-text datasets for the corpus-based component of MORE. For the ontology-based component, we use the Medical Subject Headings (MeSH) ontology and three state-of-the-art ontology-based similarity measures. In our approach, we propose a new learning objective that modifies the sigmoid cross-entropy objective function.
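The abstract does not give the exact form of the modified objective, so the following is only a minimal illustrative sketch of the general idea: a skip-gram-style sigmoid cross-entropy update in which the usual hard 0/1 target for a (center, context) pair is blended toward an ontology-based similarity score when one is available. The names `onto_sim` and `alpha` are hypothetical placeholders, not parameters taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pair_loss_and_grads(v_center, v_context, label, onto_sim=None, alpha=0.5):
    """Sigmoid cross-entropy on a single (center, context) word pair.

    label    : 1.0 for an observed pair, 0.0 for a negative sample.
    onto_sim : optional ontology-based similarity in [0, 1]; if provided,
               the hard label is blended toward it (assumed mechanism for
               injecting domain knowledge, not the authors' exact formula).
    alpha    : illustrative blending weight between corpus and ontology signal.
    """
    target = label if onto_sim is None else (1 - alpha) * label + alpha * onto_sim
    p = sigmoid(np.dot(v_center, v_context))
    loss = -(target * np.log(p + 1e-12) + (1 - target) * np.log(1 - p + 1e-12))
    grad = p - target                      # d loss / d score
    return loss, grad * v_context, grad * v_center  # grads w.r.t. center, context
```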
We used two established datasets of semantic similarity between biomedical concept pairs to evaluate the quality of the generated word embeddings. On the first dataset of 29 concept pairs, with similarity scores established by physicians and medical coders, MORE's similarity scores achieve the highest combined correlation (0.633), 5.0% higher than that of the baseline model and 12.4% higher than that of the best ontology-based similarity measure. On the second dataset of 449 concept pairs, MORE's similarity scores correlate at 0.481 with the average of four medical residents' similarity ratings, outperforming the skip-gram model by 8.1% and the best ontology measure by 6.9%. Furthermore, MORE outperforms three pre-trained transformer-based word embedding models (i.e., BERT, ClinicalBERT, and BioBERT) on both datasets.
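For context, this style of evaluation typically compares model-derived cosine similarities against human ratings via correlation. The sketch below shows that generic procedure under stated assumptions; `embeddings` and `benchmark` are hypothetical inputs (a term-to-vector dictionary and a list of rated concept pairs), not the paper's actual data files, and the choice of Pearson versus Spearman correlation here is illustrative.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(embeddings, benchmark):
    """Correlate model similarities with human ratings.

    embeddings : dict mapping term -> np.ndarray vector.
    benchmark  : iterable of (term1, term2, human_rating) triples.
    Returns (Pearson r, Spearman rho) over pairs covered by the embeddings.
    """
    model_scores, human_scores = [], []
    for t1, t2, rating in benchmark:
        if t1 in embeddings and t2 in embeddings:
            model_scores.append(cosine(embeddings[t1], embeddings[t2]))
            human_scores.append(rating)
    r, _ = pearsonr(model_scores, human_scores)
    rho, _ = spearmanr(model_scores, human_scores)
    return r, rho
```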
MORE incorporates knowledge from several biomedical ontologies into an existing corpus-based distributional semantics model, improving both the accuracy of the learned word embeddings and the extensibility of the model to a broader range of biomedical concepts. MORE allows for more accurate clustering of concepts across a wide range of applications, such as analyzing patient health records to identify subjects with similar pathologies, or integrating heterogeneous clinical data to improve interoperability between hospitals.