Training a Deep Contextualized Language Model for International Classification of Diseases, 10th Revision Classification via Federated Learning: Model Development and Validation Study.

Author Information

Chen Pei-Fu, He Tai-Liang, Lin Sheng-Che, Chu Yuan-Chia, Kuo Chen-Tsung, Lai Feipei, Wang Ssu-Ming, Zhu Wan-Xuan, Chen Kuan-Chih, Kuo Lu-Cheng, Hung Fang-Ming, Lin Yu-Cheng, Tsai I-Chang, Chiu Chi-Hao, Chang Shu-Chih, Yang Chi-Yu

Affiliations

Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan.

Department of Anesthesiology, Far Eastern Memorial Hospital, New Taipei City, Taiwan.

Publication Information

JMIR Med Inform. 2022 Nov 10;10(11):e41342. doi: 10.2196/41342.

Abstract

BACKGROUND

The automatic coding of clinical text documents with the International Classification of Diseases, 10th Revision (ICD-10) supports statistical analyses and reimbursement. With the development of natural language processing models, new transformer architectures with attention mechanisms have outperformed previous models. Although multicenter training may increase a model's performance and external validity, the privacy of clinical documents must be protected. We therefore used federated learning to train a model on multicenter data without sharing the data themselves.

OBJECTIVE

This study aims to train a classification model via federated learning for ICD-10 multilabel classification.

METHODS

Text data from discharge notes in electronic medical records were collected from three medical centers: Far Eastern Memorial Hospital, National Taiwan University Hospital, and Taipei Veterans General Hospital. After comparing the performance of different variants of bidirectional encoder representations from transformers (BERT), PubMedBERT was chosen for the word embeddings. During preprocessing, nonalphanumeric characters were retained because the model's performance decreased when these characters were removed. To explain the model's outputs, we added a label attention mechanism to the model architecture (sketched below). The model was trained with data from each of the three hospitals separately and via federated learning. The model trained via federated learning and the models trained with local data were compared on a testing set composed of data from all three hospitals. The micro F score was used to evaluate model performance across the three centers.
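The abstract does not include the authors' code, so the following is only a minimal sketch of a label attention layer of the kind described, written in PyTorch. The class name, the hidden size of 768 (typical for PubMedBERT-base), and `num_labels` are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a label attention classification head (illustrative,
# not the authors' implementation). Assumes token embeddings produced by
# a PubMedBERT encoder with hidden size 768; `num_labels` is the size of
# the ICD-10 code set being predicted.
import torch
import torch.nn as nn

class LabelAttentionClassifier(nn.Module):
    def __init__(self, hidden_size: int = 768, num_labels: int = 1000):
        super().__init__()
        # One learnable query vector per ICD-10 label.
        self.label_queries = nn.Parameter(torch.randn(num_labels, hidden_size))
        self.output = nn.Linear(hidden_size, 1)

    def forward(self, token_embeddings: torch.Tensor):
        # token_embeddings: (batch, seq_len, hidden_size)
        # Attention score of every label against every token: (batch, num_labels, seq_len)
        scores = torch.einsum("lh,bsh->bls", self.label_queries, token_embeddings)
        weights = torch.softmax(scores, dim=-1)
        # Label-specific document representations: (batch, num_labels, hidden_size)
        label_repr = torch.einsum("bls,bsh->blh", weights, token_embeddings)
        # One logit per label for multilabel classification (sigmoid applied in the loss).
        logits = self.output(label_repr).squeeze(-1)  # (batch, num_labels)
        # Returning the attention weights allows the input words that drove
        # each predicted code to be highlighted.
        return logits, weights
```

Returning the per-label attention weights is what enables the highlighted-word explanations reported in the Results.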

RESULTS

The F scores of PubMedBERT, RoBERTa (Robustly Optimized BERT Pretraining Approach), ClinicalBERT, and BioBERT (BERT for Biomedical Text Mining) were 0.735, 0.692, 0.711, and 0.721, respectively. The F score of the model that retained nonalphanumeric characters was 0.8120, whereas the F score after removing these characters was 0.7875, a decrease of 0.0245 (3.11%). The F scores on the testing set were 0.6142, 0.4472, 0.5353, and 0.2522 for the federated learning, Far Eastern Memorial Hospital, National Taiwan University Hospital, and Taipei Veterans General Hospital models, respectively. The label attention architecture made the predictions explainable by highlighting the input words that contributed to each predicted code.
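For reference, the micro F score used here pools true positives, false positives, and false negatives over all labels and documents before computing F1. A minimal sketch of the standard definition (not the authors' evaluation code):

```python
# Standard micro F1 for multilabel classification; array shapes are illustrative.
import numpy as np

def micro_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """y_true, y_pred: binary arrays of shape (num_documents, num_labels)."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Because counts are pooled across labels, frequent codes dominate the score, which makes micro averaging a common choice for highly imbalanced multilabel problems such as ICD-10 coding.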

CONCLUSIONS

Federated learning was used to train the ICD-10 classification model on multicenter clinical text while protecting data privacy. The federated model outperformed the models trained on local data alone.
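The abstract does not specify the aggregation protocol, so the sketch below assumes federated averaging (FedAvg), the most common scheme: each hospital trains locally and only model parameters, never clinical text, are sent for aggregation. The function name and the document-count weighting are assumptions for illustration.

```python
# Minimal sketch of FedAvg-style aggregation (an assumption; the paper's exact
# protocol is not published). Each client is one hospital; only parameter
# tensors are exchanged, never the raw discharge notes.
from collections import OrderedDict
import torch

def federated_average(client_states, client_sizes):
    """Weighted average of per-hospital model state_dicts.

    client_states: list of OrderedDicts from model.state_dict(), one per hospital.
    client_sizes: number of training documents at each hospital, used as weights.
    """
    total = sum(client_sizes)
    global_state = OrderedDict()
    for key in client_states[0]:
        global_state[key] = sum(
            state[key].float() * (size / total)
            for state, size in zip(client_states, client_sizes)
        )
    return global_state
```

In each communication round, the server would broadcast `global_state` back to the three hospitals, which resume local training from the averaged parameters.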
