Substantial agreement of referee recommendations at a general medical journal--a peer review evaluation at Deutsches Ärzteblatt International.

Author information

Deutsches Ärzteblatt International, Editorial Offices, Cologne, Germany.

Publication information

PLoS One. 2013 May 2;8(5):e61401. doi: 10.1371/journal.pone.0061401. Print 2013.

Abstract

BACKGROUND

Peer review is the mainstay of editorial decision making for medical journals. There is a dearth of evaluations of journal peer review with regard to reliability and validity, particularly in light of the wide variety of medical journals. Studies carried out so far indicate low agreement among reviewers. We present an analysis of the peer review process at a general medical journal, Deutsches Ärzteblatt International.

METHODOLOGY/PRINCIPAL FINDINGS

554 reviewer recommendations on 206 manuscripts submitted between July 2008 and December 2009 were analyzed: 7% recommended acceptance, 74% revision, and 19% rejection. Concerning acceptance (with or without revision) versus rejection, there was substantial agreement among reviewers (74.3% of pairs of recommendations agreed), but this was not reflected by Fleiss' or Cohen's kappa (<0.2). The agreement rate was 84% for acceptance but only 31% for rejection. However, an alternative kappa statistic, Gwet's AC1, indicated substantial agreement (0.63). Concordance between reviewer recommendations and editorial decisions was almost perfect when reviewer recommendations were unanimous. The correlation between reviewer recommendations and citations as counted by the Web of Science was low (partial correlation adjusted for year of publication: -0.03, n.s.).
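To make the kappa paradox described here concrete, the following is a minimal Python sketch. The 2x2 counts are hypothetical, chosen only to mimic the reported pattern (they are not the study's data); the formulas are the standard two-rater, two-category definitions of observed agreement, Cohen's kappa, and Gwet's AC1.

```python
# Hypothetical 2x2 table of paired reviewer recommendations on the
# accept-or-revise vs. reject dichotomy (illustrative counts only).
a, b, c, d = 70, 11, 13, 6   # a: both accept, d: both reject, b/c: disagreements
n = a + b + c + d

p_o = (a + d) / n            # observed agreement

# Cohen's kappa: chance agreement from each rater's marginal "accept" rate.
p1, p2 = (a + b) / n, (a + c) / n
p_e = p1 * p2 + (1 - p1) * (1 - p2)
kappa = (p_o - p_e) / (1 - p_e)

# Gwet's AC1: chance agreement from the average category prevalence,
# which stays small when the category distribution is skewed.
pi = (p1 + p2) / 2
p_e_ac1 = 2 * pi * (1 - pi)
ac1 = (p_o - p_e_ac1) / (1 - p_e_ac1)

print(f"observed agreement = {p_o:.3f}")    # ~0.76
print(f"Cohen's kappa      = {kappa:.3f}")  # ~0.19, paradoxically low
print(f"Gwet's AC1         = {ac1:.3f}")    # ~0.66, substantial
```

With these skewed marginals (most reviewers recommend acceptance or revision), Cohen's chance-agreement term is inflated and kappa collapses even though three quarters of recommendation pairs agree, while AC1 remains consistent with the observed agreement.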

CONCLUSIONS/SIGNIFICANCE

Although our figures are similar to those reported in the literature, our conclusion differs from the widely held view that reviewer agreement is low: based on overall agreement, we consider the concordance among reviewers sufficient for the purposes of editorial decision making. We believe that measures such as positive and negative agreement, or alternative kappa statistics, are superior to Cohen's or Fleiss' kappa in the analysis of nominal or ordinal data on reviewer agreement. Also, reviewer recommendations seem to be a poor proxy for citations, because, for example, manuscripts may change considerably during the revision process.
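For the positive and negative agreement indices favoured here, a short sketch using the same hypothetical table as above: specific agreement is computed separately within the "accept" and "reject" categories, so a skewed category distribution cannot mask poor agreement on the rarer outcome.

```python
# Same hypothetical counts as the sketch above (not the study's data).
a, b, c, d = 70, 11, 13, 6

pa = 2 * a / (2 * a + b + c)   # positive (accept) agreement
na = 2 * d / (2 * d + b + c)   # negative (reject) agreement

print(f"positive agreement = {pa:.2f}")  # ~0.85, cf. the 84% reported for acceptance
print(f"negative agreement = {na:.2f}")  # ~0.33, cf. the 31% reported for rejection
```

Reporting both indices makes the asymmetry visible that a single overall statistic hides: reviewers agree readily on acceptance but far less often on rejection.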

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3c65/3642182/0a532c5d1035/pone.0061401.g001.jpg
