Zhou Sitong, Lybarger Kevin, Yetisgen Meliha, Ostendorf Mari
University of Washington, Seattle, WA, USA.
George Mason University, Fairfax, VA, USA.
AMIA Jt Summits Transl Sci Proc. 2023 Jun 16;2023:622-631. eCollection 2023.
Symptom information is primarily documented in free-text clinical notes and is not directly accessible for downstream applications. To address this challenge, information extraction approaches are needed that can handle clinical language variation across institutions and specialties. In this paper, we present domain generalization for symptom extraction, using pretraining and fine-tuning data that differ from the target domain in institution and/or specialty and patient population. We extract symptom events using a transformer-based joint entity and relation extraction method. To reduce reliance on domain-specific features, we propose a domain generalization method that dynamically masks frequent symptom words in the source domain. Additionally, we pretrain the transformer language model (LM) on task-related unlabeled text for better representations. Our experiments indicate that the masking and adaptive pretraining methods can significantly improve performance when the source domain is more distant from the target domain.
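To make the dynamic masking idea concrete, here is a minimal sketch of what masking frequent source-domain symptom words could look like. The masking probability, the top-k frequency cutoff, the mask token, and the helper names are illustrative assumptions, not details taken from the paper; the key property is that masking is re-sampled at each training step, so the model cannot rely on memorizing frequent source-domain symptom lexemes.

```python
import random
from collections import Counter

MASK = "[MASK]"  # assumed mask token of the transformer LM's vocabulary


def frequent_symptom_words(symptom_spans, top_k=100):
    """Rank symptom trigger words by frequency in the source-domain training
    annotations and return the top_k most frequent (hypothetical helper)."""
    counts = Counter(w.lower() for span in symptom_spans for w in span.split())
    return {w for w, _ in counts.most_common(top_k)}


def mask_symptoms(tokens, frequent, mask_prob=0.5):
    """Replace frequent symptom words with the mask token. Applied afresh at
    each training step, so the masking pattern varies across epochs
    ("dynamic" masking)."""
    return [MASK if t.lower() in frequent and random.random() < mask_prob else t
            for t in tokens]


# Toy usage: suppose "fever" and "cough" dominate the source-domain labels.
frequent = frequent_symptom_words(["fever", "dry cough", "fever", "cough"], top_k=2)
tokens = "Patient reports fever and mild cough since Monday".split()
print(mask_symptoms(tokens, frequent))
```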
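The adaptive pretraining step, continuing masked-LM training of the transformer on task-related unlabeled text before fine-tuning, could be sketched as below with the Hugging Face transformers and datasets libraries. The base checkpoint, data file, sequence length, and training hyperparameters are assumptions for illustration, not the paper's settings.

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Assumed base checkpoint; the paper's actual LM may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical file of task-related unlabeled clinical notes, one per line.
raw = load_dataset("text", data_files={"train": "unlabeled_notes.txt"})
tokenized = raw["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# Standard masked-LM objective over the unlabeled in-domain text.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-lm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # the adapted checkpoint is then fine-tuned for symptom extraction
```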