Marsh Herbert W, Jayasinghe Upali W, Bond Nigel W
Department of Education, University of Oxford, United Kingdom.
Am Psychol. 2008 Apr;63(3):160-8. doi: 10.1037/0003-066X.63.3.160.
Peer review is a gatekeeper, the final arbiter of what is valued in academia, but it has been criticized in relation to traditional psychological research criteria of reliability, validity, generalizability, and potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This article summarizes the authors' research program with the Australian Research Council, which receives thousands of grant proposals from the social science, humanities, and science disciplines and reviews by assessors from all over the world. Using multilevel cross-classified models, the authors critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (e.g., age, gender, university, academic rank, research team composition, nationality, experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves. The authors propose a new approach, the reader system, which they evaluated with psychology and education grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.