

Documentation and coding of ED patient encounters: an evaluation of the accuracy of an electronic medical record.

Author information

Silfen Eric

Affiliation

Department of Biomedical Informatics, Columbia University, New York, NY 10032, USA.

Publication information

Am J Emerg Med. 2006 Oct;24(6):664-78. doi: 10.1016/j.ajem.2006.02.005.

Abstract

OBJECTIVE

The aim of the study was to describe a paper-based, template-driven record and an electronic medical record used to capture emergency care clinical information, and to compare the accuracy of these documentation systems in coding patient encounters with the American Medical Association Current Procedural Terminology-2004 (AMA CPT-2004) evaluation and management codes intended for provider reimbursement.

METHODS

A retrospective, cross-sectional study was conducted on 4-consecutive-day samples of ED patient encounter records from 2 similar community hospitals. For clinical documentation, hospital A uses an electronic medical record, whereas hospital B uses a paper-based, template-driven record. Using a simple analytic model, expert coders A and B coded the records from hospitals A and B, respectively, for completeness. First, a power analysis confirmed the adequacy of the patient record sample sizes (1 - beta = .90 at the 1% significance level), and the frequency of AMA CPT-2004 primary evaluation and management codes 99281 through 99285 was calculated. Second, the completeness discrepancy rates for hospitals A and B were compared to determine the accuracy of both the paper-based, template-driven record and the electronic medical record in documenting and representing the clinical encounter. Third, interrater reliability between expert coders A and B was calculated to assess their level of agreement in determining the completeness discrepancy rates for hospitals A and B. Finally, the frequency of primary evaluation and management codes was analyzed to determine whether there was a statistically significant difference between the paper-based, template-driven record and the electronic medical record in representing the clinical information, and whether that difference could be attributed to the differing clinical documentation systems used in hospitals A and B.
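The completeness comparison in the second step reduces to a Fisher exact test on a 2x2 table of incomplete versus complete records per hospital. A minimal stdlib sketch of that test follows; the cell counts are hypothetical, since the abstract reports only the resulting P values, not the raw counts:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact test for a 2x2 table [[a, b], [c, d]].

    With all margins fixed, sums the hypergeometric probabilities of
    every table whose probability does not exceed that of the observed
    table (the standard two-sided convention).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    r1, c1 = a + b, a + c          # row-1 and column-1 margins
    denom = comb(n, c1)

    def prob(x):                   # P(top-left cell = x | fixed margins)
        return comb(r1, x) * comb(n - r1, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# Hypothetical counts: [incomplete, complete] records at each hospital.
discrepancy_table = [[12, 168],   # hospital A (electronic medical record)
                     [16, 154]]   # hospital B (paper-based template)
print(f"two-sided Fisher exact P = {fisher_exact_two_sided(discrepancy_table):.3f}")
```

A small difference in discrepancy rates at this sample size yields a nonsignificant P, mirroring the pattern reported in the results.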

RESULTS

First, a descriptive display showed that the frequencies of primary evaluation and management codes 99283 and 99284 differed within hospital A (expert coder A: 36.1% vs 39.1%; expert coder B: 36.6% vs 38.7%) and within hospital B (expert coder A: 47.8% vs 21.9%; expert coder B: 48.6% vs 21.4%), with a median primary evaluation and management code of 99284 for hospital A and 99283 for hospital B. Second, the Fisher exact test compared the completeness discrepancy rates between hospitals A and B as assessed by each expert coder and demonstrated no statistically significant difference in the completeness discrepancy rates (accuracy) between the paper-based, template-driven record and the electronic medical record documentation and coding system, whether assessed by expert coder A (P = .370) or expert coder B (P = .819). Third, interrater reliability between expert coders A and B was evaluated using the Cohen kappa statistic. Whether evaluated individually or jointly with respect to hospitals A and B, expert coders A and B showed good strength of agreement in their assessments of the accuracy of the documentation and coding system for hospital A (kappa = 0.6200) and hospital B (kappa = 0.6906), as well as for both hospitals evaluated together (kappa = 0.6616). Finally, interhospital differences in the frequency of primary evaluation and management codes were evaluated using the Pearson chi-square test with 3 df. The results for expert coder A (chi-square = 47.4160; P < .001) and expert coder B (chi-square = 46.5946; P < .001) indicate a statistically significant difference between hospitals A and B in the frequency distribution of primary evaluation and management codes, probably because of the dispersion of codes 99283 and 99284.
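The reported Pearson chi-square statistics can be sanity-checked against the chi-square survival function, which has a closed form at 3 degrees of freedom. A quick stdlib sketch, using the test statistics quoted above:

```python
from math import erfc, exp, pi, sqrt

def chi2_sf_df3(x):
    """P(X > x) for a chi-square variable with 3 degrees of freedom.

    Closed form via the upper incomplete gamma function:
    sf(x) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2).
    """
    return erfc(sqrt(x / 2)) + sqrt(2 * x / pi) * exp(-x / 2)

# Reported interhospital test statistics (3 df) for each expert coder.
for coder, chi2 in (("A", 47.4160), ("B", 46.5946)):
    print(f"coder {coder}: chi-square = {chi2}, P = {chi2_sf_df3(chi2):.2e}")
```

Both P values come out many orders of magnitude below .001, consistent with the reported P < .001.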

CONCLUSIONS

A keystroke-driven electronic medical record, residing on a knowledge platform that incorporates a structured clinical terminology, administrative coding schemata, and AMA CPT-2004 codes, and using object-oriented, open-ended, branching-chain clinical algorithms that "force" physician documentation of the clinical elements, captures and represents ED clinical encounter data as accurately as a paper-based, template-driven documentation system, both in terms of the presence or absence of the medically necessary, discrete data elements and of the textual, documentation-dependent medical decision-making elements.

