The Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, Ohio, United States.
Pediatric Dentistry, Nationwide Children's Hospital, Columbus, Ohio, United States.
Methods Inf Med. 2022 Dec;61(5-06):195-200. doi: 10.1055/a-1900-7351. Epub 2022 Jul 14.
Generative pretrained transformer (GPT) models are among the latest large pretrained natural language processing models. They enable model training with limited data, reducing dependency on large datasets, which are scarce and costly to establish and maintain. There is growing interest in exploring the use of GPT models in health care.
We investigate the performance of GPT-2 and GPT-Neo models for medical text prediction using 374,787 free-text dental notes.
We fine-tune pretrained GPT-2 and GPT-Neo models for next-word prediction on a dataset of over 374,000 manually written sections of dental clinical notes. Each model is trained on 80% of the dataset, validated on 10%, and tested on the remaining 10%. We report model performance in terms of next-word prediction accuracy and loss. For comparison, we also fine-tune a non-GPT pretrained neural network model, XLNet (large), on the same task. To analyze performance by token type, we annotate each token in 100 randomly sampled notes by category (e.g., names, abbreviations, clinical terms, punctuation) and compare the models' performance within each category.
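To make the fine-tuning setup concrete, the following is a minimal sketch (not the authors' released code) of adapting GPT-2 for next-word prediction on note text with the Hugging Face transformers library; the data file names, sequence length, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: fine-tune GPT-2 as a causal (next-word) language model.
# File names and hyperparameters are assumptions, not the paper's settings.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical files holding the 80/10/10 split of de-identified note sections.
raw = load_dataset("text", data_files={"train": "notes_train.txt",
                                       "validation": "notes_val.txt",
                                       "test": "notes_test.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Causal-LM collator: labels are the inputs shifted by one position,
# which is exactly the next-word prediction objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="gpt2-dental-notes",
                         per_device_train_batch_size=8,
                         num_train_epochs=3,
                         evaluation_strategy="epoch")

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=collator)
trainer.train()
print(trainer.evaluate(tokenized["test"]))  # loss on the held-out 10%
```

The same scaffold applies to GPT-Neo by swapping the checkpoint name (e.g., "EleutherAI/gpt-neo-125M"); XLNet uses a permutation-based objective and requires a different head and collator.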
Both models achieve acceptable accuracy scores (GPT-2: 76%; GPT-Neo: 53%), and both outperform XLNet in terms of accuracy. The GPT-2 model also performs better in the manual evaluation, especially for names, abbreviations, and punctuation.
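As an illustration of how a next-word prediction accuracy of this kind can be computed, here is a small sketch (an assumption about the metric, not the paper's exact evaluation code) of top-1 next-token accuracy: at each position, the model's highest-probability token is compared against the token actually written next.

```python
# Sketch of top-1 next-token accuracy for a causal language model.
# The example sentence is illustrative, not from the study's dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def next_token_accuracy(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits          # (1, seq_len, vocab_size)
    preds = logits[0, :-1].argmax(dim=-1)   # prediction at i targets token i + 1
    targets = ids[0, 1:]
    return (preds == targets).float().mean().item()

print(next_token_accuracy("Patient presented for routine prophylaxis."))
```

The per-category manual evaluation described above would apply the same position-wise comparison, but aggregate the hits separately for each annotated token category (names, abbreviations, clinical terms, punctuation, etc.).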
The results suggest that pretrained models have the potential to assist medical charting in the future. Our study presents one of the first applications of GPT models to medical notes, and we share lessons learned, insights, and suggestions for future implementations.