Jones B E, South B R, Shao Y, Lu C C, Leng J, Sauer B C, Gundlapalli A V, Samore M H, Zeng Q
Appl Clin Inform. 2018 Jan;9(1):122-128. doi: 10.1055/s-0038-1626725. Epub 2018 Feb 21.
Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes.
This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia.
Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing pneumonia search terms for training, and 100 randomly selected documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in the training and validation sets.
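The abstract does not describe the features or library the authors used; the following is a minimal sketch of the general approach — a support vector machine trained to classify documents by their assertion of pneumonia — assuming TF-IDF n-gram features and scikit-learn. The toy notes and labels are hypothetical.

```python
# Minimal sketch (not the authors' implementation): an SVM document
# classifier for pneumonia assertion, using TF-IDF features as a stand-in
# for whatever feature set the study actually used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training notes, labeled 1 if pneumonia is clinically asserted.
notes = [
    "chest xray shows right lower lobe infiltrate consistent with pneumonia",
    "impression community acquired pneumonia start ceftriaxone and azithromycin",
    "no infiltrate on imaging pneumonia ruled out",
    "cough likely viral bronchitis no evidence of pneumonia",
]
labels = [1, 1, 0, 0]

# Vectorize unigrams/bigrams, then fit a linear-kernel SVM.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(notes, labels)

# Classify an unseen note (output is 0 or 1).
print(clf.predict(["cxr with new infiltrate treated empirically for pneumonia"]))
```

In practice a clinical system would also need negation and uncertainty handling (e.g., "pneumonia ruled out"), which is why the study annotated clinical assertions rather than mere term mentions.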
Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80).
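The performance figures above follow from standard confusion-matrix arithmetic against the chart-review gold standard. The sketch below computes sensitivity (recall) and PPV (precision), and combines the NLP and ED-code classifications with a logical OR — the abstract does not state the combination rule, so the OR is an assumption, and the flag vectors are illustrative, not study data.

```python
# Sketch of the evaluation arithmetic; all counts below are illustrative.

def sensitivity_ppv(pred, gold):
    """Return (sensitivity, PPV) for binary predictions vs. gold labels."""
    tp = sum(p and g for p, g in zip(pred, gold))          # true positives
    fn = sum((not p) and g for p, g in zip(pred, gold))    # missed cases
    fp = sum(p and (not g) for p, g in zip(pred, gold))    # false alarms
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical per-visit flags (1 = classified as pneumonia).
nlp_flags  = [1, 1, 0, 0, 1]
code_flags = [0, 1, 1, 0, 0]
gold       = [1, 1, 1, 0, 0]   # manual chart review

# Assumed combination rule: positive if either method flags the visit.
combined = [int(n or c) for n, c in zip(nlp_flags, code_flags)]
sens, ppv = sensitivity_ppv(combined, gold)
print(sens, ppv)  # → 1.0 0.75
```

An OR-combination can only raise sensitivity relative to either method alone (it captures every case either flags) at the possible cost of PPV, which matches the pattern reported above: the combined classifier's sensitivity (0.89) exceeds both the NLP tool's and the diagnosis codes' alone.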
System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty.