Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America.
New Economic School, Moscow, Russia.
PLoS One. 2024 Apr 10;19(4):e0300710. doi: 10.1371/journal.pone.0300710. eCollection 2024.
How do author perceptions match up to the outcomes of the peer-review process and the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% against an approximately 25% acceptance rate. (2) Female authors exhibited marginally higher (statistically significant) miscalibration than male authors; authors invited to serve as meta-reviewers and those invited as reviewers were similarly calibrated to each other, and both were better calibrated than authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two of their own submissions generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for the paper they ranked higher. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed with each other at a similar rate, also about a third of the time. (5) At least 30% of respondents, for both accepted and rejected papers, said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of peer review.