Annu Int Conf IEEE Eng Med Biol Soc. 2021 Nov;2021:1989-1992. doi: 10.1109/EMBC46164.2021.9630611.
The rapid increase in adoption of electronic health records in health care institutions has motivated the use of entity extraction tools to extract meaningful information from clinical notes written in an unstructured, narrative style. This paper investigates the performance of two such tools in automatic entity extraction. Specifically, this work focuses on the automatic medication extraction performance of Amazon Comprehend Medical (ACM) and the Clinical Language Annotation, Modeling and Processing (CLAMP) toolkit using the 2014 i2b2 NLP challenge dataset and its annotated medical entities. Recall, precision, and F-score are used to evaluate the performance of the tools. Clinical Relevance: The majority of data in electronic health records (EHRs) are in the form of free text that contains a wealth of patient information, whereas computerized applications in healthcare institutions and clinical research rely on structured data. As a result, information hidden in clinical free text needs to be extracted and formatted as structured data. This paper evaluates the performance of ACM and CLAMP in automatic entity extraction. The evaluation results show that CLAMP achieves an F-score of 91%, in comparison to an 87% F-score by ACM.
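The evaluation metrics named in the abstract (recall, precision, F-score) follow the standard entity-level definitions; a minimal sketch is below. The counts in the usage example are illustrative only and are not taken from the paper.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F-score from entity-level counts.

    tp: extracted entities that match the gold annotations
    fp: extracted entities absent from the gold annotations
    fn: gold-annotated entities the tool failed to extract
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F-score is the harmonic mean of precision and recall
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Illustrative counts (not from the paper):
p, r, f = precision_recall_f1(tp=90, fp=10, fn=10)
print(f"precision={p:.2f} recall={r:.2f} F-score={f:.2f}")
```

With these hypothetical counts the script prints `precision=0.90 recall=0.90 F-score=0.90`; the paper's reported 91% (CLAMP) and 87% (ACM) F-scores come from the same formula applied to their extraction results on the i2b2 dataset.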