Goldberg Alexander, Stelmakh Ivan, Cho Kyunghyun, Oh Alice, Agarwal Alekh, Belgrave Danielle, Shah Nihar B
School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
New Economic School, Moscow, Russia.
PLoS One. 2025 Apr 2;20(4):e0320444. doi: 10.1371/journal.pone.0320444. eCollection 2025.
Is it possible to reliably evaluate the quality of peer reviews? We study this question driven by two primary motivations: incentivizing high-quality reviewing using the assessed quality of reviews, and measuring changes to review quality in experiments. We conduct a large-scale study at the NeurIPS 2022 conference, a top-tier conference in machine learning, in which we invited (meta-)reviewers and authors to voluntarily evaluate the reviews given to submitted papers. First, we conduct a randomized controlled trial to examine bias due to the length of reviews. We generate elongated versions of reviews by adding substantial amounts of non-informative content. Participants in the control group evaluate the original reviews, whereas participants in the experimental group evaluate the artificially lengthened versions. We find that the lengthened reviews are scored (statistically significantly) higher in quality than the original reviews. Additionally, in an analysis of observational data, we find that authors are positively biased towards reviews recommending acceptance of their own papers, even after controlling for the confounders of review length, review quality, and the number of papers per author. We also measure disagreement rates of 28% - 32% between multiple evaluations of the same review, which is comparable to the disagreement rate of paper reviewers at NeurIPS. Further, we assess the amount of miscalibration of evaluators of reviews using a linear model of quality scores and find that it is similar to estimates of the miscalibration of paper reviewers at NeurIPS. Finally, we estimate the variability in subjective opinions about how individual criteria map to overall scores of review quality and find that it is roughly the same as in the reviewing of papers. Our results suggest that the various problems that exist in the reviewing of papers - inconsistency, bias towards irrelevant factors, miscalibration, subjectivity - also arise in the reviewing of reviews.
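As a rough illustration of the style of analysis mentioned above, the following is a minimal sketch, not the authors' code, of an additive linear model of quality scores (score = review quality + evaluator bias + noise) fit by least squares; the spread of the fitted evaluator-bias terms serves as a crude proxy for evaluator miscalibration. All data here are simulated stand-ins, and every variable name and number is an illustrative assumption rather than a detail from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_evaluators, n_reviews = 40, 150  # assumed sizes, for illustration only

# Simulated ground truth: per-review quality and per-evaluator bias.
true_quality = rng.normal(size=n_reviews)
true_bias = rng.normal(scale=0.6, size=n_evaluators)

# Assume each review is independently scored by two distinct evaluators.
records = []
for r in range(n_reviews):
    for e in rng.choice(n_evaluators, size=2, replace=False):
        score = true_quality[r] + true_bias[e] + rng.normal(scale=0.4)
        records.append((e, r, score))
evaluator_idx, review_idx, scores = (np.array(x) for x in zip(*records))

# Design matrix with one indicator column per review and per evaluator.
n_obs = len(scores)
X = np.zeros((n_obs, n_reviews + n_evaluators))
X[np.arange(n_obs), review_idx] = 1.0
X[np.arange(n_obs), n_reviews + evaluator_idx] = 1.0

# Least-squares fit; the model is identified only up to a constant shift,
# so center the fitted evaluator-bias terms before summarizing them.
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
fitted_bias = coef[n_reviews:] - coef[n_reviews:].mean()
print(f"std of fitted evaluator biases: {fitted_bias.std():.2f}")
print(f"std of true evaluator biases:   {true_bias.std():.2f}")
```

The spread of the fitted bias terms (relative to score noise) is one simple way to quantify how differently calibrated evaluators are; the paper's actual estimation procedure may differ.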