Singh Arjun, Sartipi Shadi, Sun Haoqi, Milde Rebecca, Turley Niels, Quinn Carson, Harrold G Kyle, Gillani Rebecca L, Turbett Sarah E, Das Sudeshna, Zafar Sahar, Fernandes Marta, Westover M Brandon, Mukerji Shibani S
Department of Neurology, Massachusetts General Hospital, 55 Fruit St, Wang ACC 835, Boston, MA, 02114, United States, 1 2163379887.
Harvard Medical School, Boston, MA, United States.
JMIR Med Inform. 2025 Aug 29;13:e63157. doi: 10.2196/63157.
Identifying neuroinfectious disease (NID) cases using International Classification of Diseases (ICD) billing codes is often imprecise, while manual chart reviews are labor-intensive. Machine learning models can leverage unstructured electronic health records to detect subtle NID indicators, process large data volumes efficiently, and reduce misclassification. Although accurate NID classification is needed for research and clinical decision support, the use of unstructured notes for this purpose remains underexplored.
The objective of this study is to develop and validate a machine learning model to identify NIDs from unstructured patient notes.
Clinical notes from patients who had undergone lumbar puncture were obtained from the electronic health record of an academic hospital network (Mass General Brigham [MGB]), with half associated with NID-related diagnostic codes. Ground truth was established by chart review by 6 NID-expert physicians. NID keywords were generated with regular expressions, and the extracted text was converted into bag-of-words representations using n-grams (n=1, 2, 3). Notes were randomly split into training (80%; 2400/3000 notes) and held-out test (20%; 600/3000 notes) sets. Feature selection was performed using logistic regression with L1 regularization. An extreme gradient boosting (XGBoost) model classified NID cases, and performance was evaluated using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). The performance of the natural language processing (NLP) model was compared with that of the Llama 3.2 autoregressive model on the MGB test set. The NLP model was additionally validated on external data from an independent hospital (Beth Israel Deaconess Medical Center [BIDMC]).
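As an illustration of the pipeline described above, the sketch below shows one way the keyword extraction, n-gram featurization, L1-regularized feature selection, and XGBoost classification could be wired together with scikit-learn and xgboost. The keyword pattern, variable names, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the described pipeline (assumptions noted; not the authors' code).
# Assumes `notes` is a list of de-identified note strings and `labels` the
# chart-review ground truth (1 = NID, 0 = non-NID).
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score
from xgboost import XGBClassifier

# Hypothetical NID keyword pattern; the study's actual regular expressions are not given.
nid_pattern = re.compile(r"mening|encephal|ventriculitis|abscess", re.IGNORECASE)

def extract_keyword_text(note):
    """Keep only sentences that match the (hypothetical) NID keyword pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", note)
    return " ".join(s for s in sentences if nid_pattern.search(s))

texts = [extract_keyword_text(n) for n in notes]

# Bag-of-words with unigrams, bigrams, and trigrams (n = 1, 2, 3).
vectorizer = CountVectorizer(ngram_range=(1, 3), min_df=5)
X = vectorizer.fit_transform(texts)
y = np.asarray(labels)

# 80/20 split into training and held-out test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# L1-regularized logistic regression for feature selection.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5, max_iter=1000))
X_train_sel = selector.fit_transform(X_train, y_train)
X_test_sel = selector.transform(X_test)

# XGBoost classifier on the selected n-gram features.
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_train_sel, y_train)

proba = clf.predict_proba(X_test_sel)[:, 1]
print("AUROC:", roc_auc_score(y_test, proba))
print("AUPRC:", average_precision_score(y_test, proba))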
This study included 3000 patient notes from MGB from January 22, 2010, to September 21, 2023. Of 1284 initial n-gram features, 342 were selected, with the most significant features being "meningitis," "ventriculitis," and "meningoencephalitis." The XGBoost model achieved an AUROC of 0.98 (95% CI 0.96-0.99) and an AUPRC of 0.89 (95% CI 0.83-0.94) on MGB test data. In comparison, NID identification using ICD billing codes showed high sensitivity (0.97) but poor specificity (0.59), overestimating NID cases. Llama 3.2 improved specificity (0.94) but had low sensitivity (0.64) and an AUROC of 0.80. In contrast, our NLP model balanced specificity (0.96) and sensitivity (0.84), outperforming both methods in accuracy and reliability on MGB data. When tested on external data from BIDMC, the NLP model maintained an AUROC of 0.98 (95% CI 0.96-0.99), with an AUPRC of 0.78 (95% CI 0.66-0.89).
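The confidence intervals reported above are of the kind produced by bootstrapping over the held-out test set; the snippet below sketches how percentile intervals for AUROC and AUPRC, together with sensitivity and specificity at a fixed operating point, could be computed. The 0.5 threshold, 1000 resamples, and variable names (reusing `y_test` and `proba` from the sketch above) are assumptions for illustration, not the study's exact procedure.

# Illustrative bootstrap CIs for AUROC/AUPRC plus threshold-based sensitivity/specificity.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

def bootstrap_ci(y_true, y_score, metric, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI for a score-based metric on the test set."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # skip degenerate resamples
            continue
        stats.append(metric(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_score), (lo, hi)

auroc, auroc_ci = bootstrap_ci(y_test, proba, roc_auc_score)
auprc, auprc_ci = bootstrap_ci(y_test, proba, average_precision_score)

# Sensitivity/specificity at an assumed operating threshold of 0.5.
pred = (proba >= 0.5).astype(int)
tp = np.sum((pred == 1) & (y_test == 1)); fn = np.sum((pred == 0) & (y_test == 1))
tn = np.sum((pred == 0) & (y_test == 0)); fp = np.sum((pred == 1) & (y_test == 0))
print(f"AUROC {auroc:.2f} (95% CI {auroc_ci[0]:.2f}-{auroc_ci[1]:.2f})")
print(f"AUPRC {auprc:.2f} (95% CI {auprc_ci[0]:.2f}-{auprc_ci[1]:.2f})")
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))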
The NLP model accurately identifies NID cases from clinical notes. Validated across 2 independent hospital datasets, it demonstrates feasibility for large-scale NID research and cohort generation. Further external validation would help establish generalizability to other institutions.