
Intra-rater and inter-rater reliability of a medical record abstraction study on transition of care after childhood cancer.

Author information

Gianinazzi Micòl E, Rueegg Corina S, Zimmerman Karin, Kuehni Claudia E, Michel Gisela

Affiliations

Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland.

Pediatric Hematology/Oncology, University Children's Hospital, Bern, Switzerland.

Publication information

PLoS One. 2015 May 22;10(5):e0124290. doi: 10.1371/journal.pone.0124290. eCollection 2015.

Abstract

BACKGROUND

The abstraction of data from medical records is a widespread practice in epidemiological research. However, studies using this means of data collection rarely report reliability. Within the Transition after Childhood Cancer Study (TaCC), which is based on a medical record abstraction, we conducted a second independent abstraction of the data with the aim of assessing a) intra-rater reliability of one rater at two time points; b) possible learning effects between these two time points compared to a gold standard; and c) inter-rater reliability.

METHOD

Within the TaCC study we conducted a systematic medical record abstraction in the 9 Swiss clinics with pediatric oncology wards. In a second phase we selected a subsample of medical records in 3 clinics and conducted a second independent abstraction. We then assessed intra-rater reliability at two time points, the learning effect over time (comparing each rater at two time points with a gold standard), and inter-rater reliability for a set of selected variables. We calculated percentage agreement and Cohen's kappa.
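The two agreement measures used in the study are straightforward to compute: percentage agreement is the fraction of records on which two abstractions match, and Cohen's kappa corrects that figure for the agreement expected by chance. The sketch below is a minimal stdlib-only illustration of both formulas; the rater names and the example ratings are hypothetical, not data from the TaCC study.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which two abstractions agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected if ratings were independent.
    """
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal category frequencies
    p_e = sum(counts_a[k] * counts_b[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two independent abstractions of the same 10 records
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater2 = ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes"]

print(percent_agreement(rater1, rater2))   # 0.8
print(round(cohens_kappa(rater1, rater2), 2))  # 0.58
```

Note how kappa (0.58) is lower than raw agreement (0.80): with only two roughly balanced categories, a fair amount of agreement would occur by chance alone, which is why the study reports both measures.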

FINDINGS

For the assessment of intra-rater reliability we included 154 records (80 for rater 1; 74 for rater 2). For inter-rater reliability we could include 70 records. Intra-rater reliability was substantial to excellent (Cohen's kappa 0.6-0.8) with an observed percentage agreement of 75%-95%. Learning effects were observed for all variables. Inter-rater reliability was substantial to excellent (Cohen's kappa 0.70-0.83) with high agreement ranging from 86% to 100%.

CONCLUSIONS

Our study showed that data abstracted from medical records are reliable. Investigating intra-rater and inter-rater reliability can give confidence in the conclusions drawn from the abstracted data and can increase data quality by minimizing systematic errors.


Fig 1 (pone.0124290.g001): https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d3f9/4441480/8ec53beae9b0/pone.0124290.g001.jpg
