
Improving entity recognition using ensembles of deep learning and fine-tuned large language models: A case study on adverse event extraction from VAERS and social media.

Author Information

Li Yiming, Viswaroopan Deepthi, He William, Li Jianfu, Zuo Xu, Xu Hua, Tao Cui

Affiliations

McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX 77030, USA.

Department of Electrical & Computer Engineering, Pratt School of Engineering, Duke University, 305 Tower Engineering Building, Durham, NC 27708, USA.

Publication Information

J Biomed Inform. 2025 Mar;163:104789. doi: 10.1016/j.jbi.2025.104789. Epub 2025 Feb 7.

Abstract

OBJECTIVE

Extraction of adverse events (AEs) following COVID-19 vaccination from text data is crucial for monitoring and analyzing the safety profiles of immunizations, identifying potential risks, and ensuring the safe use of these products. Traditional deep learning models are adept at learning intricate feature representations and dependencies in sequential data, but often require extensive labeled data. In contrast, large language models (LLMs) excel at understanding contextual information but exhibit unstable performance on named entity recognition (NER) tasks, possibly due to their broad but unspecific training. This study aims to evaluate the effectiveness of LLMs and traditional deep learning models in AE extraction, and to assess the impact of ensembling these models on performance.

METHODS

In this study, we used reports and posts from the Vaccine Adverse Event Reporting System (VAERS) (n = 230), Twitter (n = 3,383), and Reddit (n = 49) as our corpora. Our goal was to extract three types of entities: vaccine, shot, and adverse event (ae). We explored multiple LLMs, including GPT-2, GPT-3.5, GPT-4, Llama-2 7b, and Llama-2 13b, fine-tuning all of them except GPT-4, as well as traditional deep learning models such as recurrent neural networks (RNNs) and Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT). To enhance performance, we created an ensemble of the three best-performing models. For evaluation, we used strict and relaxed F1 scores for each entity type, and micro-averaged F1 to assess overall performance.
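The abstract does not spell out the matching criteria behind the strict and relaxed F1 scores. A common convention, assumed here for illustration, is that a strict match requires identical span boundaries and entity type, while a relaxed match requires only overlapping spans of the same type. A minimal sketch under that assumption:

```python
def overlaps(a, b):
    """True if spans a and b, given as (start, end, type), overlap in offsets."""
    return a[0] < b[1] and b[0] < a[1]

def f1(pred, gold, relaxed=False):
    """pred/gold: lists of (start, end, type) spans.
    Returns (precision, recall, F1) under strict or relaxed matching."""
    def matched(span, others):
        for o in others:
            if span[2] == o[2] and (overlaps(span, o) if relaxed else span[:2] == o[:2]):
                return True
        return False
    tp_p = sum(matched(p, gold) for p in pred)   # predictions matching some gold span
    tp_g = sum(matched(g, pred) for g in gold)   # gold spans matched by some prediction
    prec = tp_p / len(pred) if pred else 0.0
    rec = tp_g / len(gold) if gold else 0.0
    return prec, rec, (2 * prec * rec / (prec + rec) if prec + rec else 0.0)
```

Micro-averaged F1 would then aggregate the matched/total counts across all three entity types before computing a single score, rather than averaging the per-type F1 values.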

RESULTS

The ensemble demonstrated the best performance in identifying the entities "vaccine," "shot," and "ae," achieving strict F1 scores of 0.878, 0.930, and 0.925, respectively, and a micro-averaged F1 score of 0.903. These results underscore the importance of fine-tuning models for specific tasks and demonstrate the effectiveness of ensemble methods in enhancing performance.
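The abstract does not state how the three best models were combined. A common NER ensembling scheme, assumed here purely for illustration, is token-level majority voting over each model's BIO label sequence:

```python
from collections import Counter

def majority_vote(model_outputs):
    """model_outputs: list of per-model BIO label sequences for one sentence
    (all the same length). Returns the per-token majority label; on a tie,
    Counter preserves insertion order, so the earliest-listed model wins."""
    return [Counter(labels).most_common(1)[0][0]
            for labels in zip(*model_outputs)]
```

For example, with three models voting on two tokens, a label predicted by two of the three models wins that position.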

CONCLUSION

In conclusion, this study demonstrates the effectiveness and robustness of ensembling fine-tuned traditional deep learning models and LLMs for extracting AE-related information following COVID-19 vaccination. It contributes to the advancement of natural language processing in the biomedical domain, providing valuable insights into improving AE extraction from text data for pharmacovigilance and public health surveillance.

