Simoneit Céline, Heuwieser Wolfgang, Arlt Sebastian P
Freie Universität Berlin, Berlin, Germany.
J Vet Med Educ. 2012 Summer;39(2):119-27. doi: 10.3138/jvme.1111.113R.
This study's objective was to determine respondents' inter-observer agreement on a detailed checklist for evaluating three exemplars of the scientific literature in the field of bovine reproduction (one case report, one randomized controlled study without blinding, and one blinded, randomized controlled study). Fourteen international scientists in the field of animal reproduction were provided with the three articles, three copies of the checklist, and a supplementary explanation. Thirteen responded to more than 90% of the items. Overall repeatability between respondents, measured with Fleiss's κ, was 0.35 (fair agreement). Combining the "strongly agree" and "agree" responses and the "strongly disagree" and "disagree" responses increased κ to 0.49 (moderate agreement). Evaluation of the information given in the three articles on housing of the animals (35% identical answers) and on preconditions or pretreatments (42%) varied widely. Even though overall repeatability was only fair, repeatability for the important categories was high (e.g., 98% agreement). Our data show that the checklist is a reasonable and practical supporting tool for assessing the quality of publications. It may therefore be used in teaching and practicing evidence-based veterinary medicine, and it can support training in the systematic, critical appraisal of information and in clinical decision making.
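As a worked illustration of the agreement statistic reported above, the following Python sketch computes Fleiss's κ from a rater-by-category count matrix. The rating matrix, item count, and printed values are invented for demonstration and are not the study's data; only the collapsing step, which merges "strongly agree" with "agree" and "strongly disagree" with "disagree", mirrors the procedure described in the abstract.

```python
# Minimal sketch of Fleiss's kappa for inter-observer agreement.
# counts[i, j] = number of raters assigning item i to category j;
# every row must sum to the same number of raters n.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                       # raters per item
    # Per-item observed agreement P_i = (sum_j n_ij^2 - n) / (n(n-1))
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()                              # mean observed agreement
    p_j = counts.sum(axis=0) / counts.sum()         # category proportions
    p_e = np.square(p_j).sum()                      # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical checklist items rated by 13 respondents on a 5-point scale;
# columns = strongly agree, agree, neutral, disagree, strongly disagree.
ratings = np.array([
    [6, 5, 1, 1, 0],
    [2, 7, 2, 1, 1],
    [0, 1, 2, 6, 4],
    [8, 3, 1, 1, 0],
])

print(f"5-point kappa:   {fleiss_kappa(ratings):.2f}")

# Collapsing the scale as in the study (any agreement / neutral /
# any disagreement) typically raises kappa, as it did in the abstract.
collapsed = np.column_stack([
    ratings[:, 0] + ratings[:, 1],   # strongly agree + agree
    ratings[:, 2],                   # neutral
    ratings[:, 3] + ratings[:, 4],   # disagree + strongly disagree
])
print(f"collapsed kappa: {fleiss_kappa(collapsed):.2f}")
```

On this toy matrix the collapsed κ is noticeably higher than the 5-point κ, illustrating the mechanism behind the reported increase from 0.35 to 0.49; the toy values themselves do not reproduce the study's results.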